2023-03-25 21:42:34,511 INFO [finetune.py:1046] (6/7) Training started
2023-03-25 21:42:34,511 INFO [finetune.py:1056] (6/7) Device: cuda:6
2023-03-25 21:42:34,514 INFO [finetune.py:1065] (6/7) {'frame_shift_ms': 10.0, 'allowed_excess_duration_ratio': 0.1, 'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.23.4', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '62e404dd3f3a811d73e424199b3408e309c06e1a', 'k2-git-date': 'Mon Jan 30 02:26:16 2023', 'lhotse-version': '1.12.0.dev+git.3ccfeb7.clean', 'torch-version': '1.13.0', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.8', 'icefall-git-branch': 'master', 'icefall-git-sha1': 'd74822d-dirty', 'icefall-git-date': 'Tue Mar 21 21:35:32 2023', 'icefall-path': '/home/lishaojie/icefall', 'k2-path': '/home/lishaojie/.conda/envs/env_lishaojie/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/home/lishaojie/.conda/envs/env_lishaojie/lib/python3.8/site-packages/lhotse/__init__.py', 'hostname': 'cnc533', 'IP address': '127.0.1.1'}, 'world_size': 7, 'master_port': 18181, 'tensorboard': True, 'num_epochs': 30, 'start_epoch': 1, 'start_batch': 0, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming/exp1'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'base_lr': 0.004, 'lr_batches': 100000.0, 'lr_epochs': 100.0, 'context_size': 2, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 2000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'do_finetune': True, 'init_modules': 'encoder', 'finetune_ckpt': '/home/lishaojie/icefall/egs/commonvoice/ASR/pruned_transducer_stateless7_streaming/exp/english_pretrain/epoch-30.pt', 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 200, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'input_strategy': 'PrecomputedFeatures', 'blank_id': 0, 'vocab_size': 500}
2023-03-25 21:42:34,514 INFO [finetune.py:1067] (6/7) About to create model
2023-03-25 21:42:34,854 INFO [zipformer.py:405] (6/7) At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
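The per-stack Zipformer options in the dict above ('num_encoder_layers': '2,4,3,2,4', 'zipformer_downsampling_factors': '1,2,4,8,2', and so on) are logged as comma-separated strings, one value per encoder stack. A minimal sketch of turning such strings back into per-stack integer lists (the helper name is ours, not icefall's):

    def per_stack_ints(s: str) -> list:
        # "2,4,3,2,4" -> [2, 4, 3, 2, 4], one value per encoder stack
        return [int(v) for v in s.split(",")]

    num_encoder_layers = per_stack_ints("2,4,3,2,4")
    downsampling_factors = per_stack_ints("1,2,4,8,2")
    assert len(num_encoder_layers) == len(downsampling_factors) == 5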
2023-03-25 21:42:34,863 INFO [finetune.py:1071] (6/7) Number of model parameters: 70369391
2023-03-25 21:42:34,863 INFO [finetune.py:626] (6/7) Loading checkpoint from /home/lishaojie/icefall/egs/commonvoice/ASR/pruned_transducer_stateless7_streaming/exp/english_pretrain/epoch-30.pt
2023-03-25 21:42:35,412 INFO [finetune.py:647] (6/7) Loading parameters starting with prefix encoder
2023-03-25 21:42:36,830 INFO [finetune.py:1093] (6/7) Using DDP
2023-03-25 21:42:37,685 INFO [commonvoice_fr.py:392] (6/7) About to get train cuts
2023-03-25 21:42:37,687 INFO [commonvoice_fr.py:218] (6/7) Enable MUSAN
2023-03-25 21:42:37,687 INFO [commonvoice_fr.py:219] (6/7) About to get Musan cuts
2023-03-25 21:42:39,415 INFO [commonvoice_fr.py:243] (6/7) Enable SpecAugment
2023-03-25 21:42:39,415 INFO [commonvoice_fr.py:244] (6/7) Time warp factor: 80
2023-03-25 21:42:39,415 INFO [commonvoice_fr.py:254] (6/7) Num frame mask: 10
2023-03-25 21:42:39,415 INFO [commonvoice_fr.py:267] (6/7) About to create train dataset
2023-03-25 21:42:39,416 INFO [commonvoice_fr.py:294] (6/7) Using DynamicBucketingSampler.
2023-03-25 21:42:42,296 INFO [commonvoice_fr.py:309] (6/7) About to create train dataloader
2023-03-25 21:42:42,297 INFO [commonvoice_fr.py:399] (6/7) About to get dev cuts
2023-03-25 21:42:42,299 INFO [commonvoice_fr.py:340] (6/7) About to create dev dataset
2023-03-25 21:42:42,714 INFO [commonvoice_fr.py:357] (6/7) About to create dev dataloader
2023-03-25 21:42:42,714 INFO [finetune.py:1289] (6/7) Sanity check -- see if any of the batches in epoch 1 would cause OOM.
2023-03-25 21:46:46,134 INFO [finetune.py:1317] (6/7) Maximum memory allocated so far is 5162MB
2023-03-25 21:46:46,826 INFO [finetune.py:1317] (6/7) Maximum memory allocated so far is 5675MB
2023-03-25 21:46:48,914 INFO [finetune.py:1317] (6/7) Maximum memory allocated so far is 5675MB
2023-03-25 21:46:49,576 INFO [finetune.py:1317] (6/7) Maximum memory allocated so far is 5675MB
2023-03-25 21:46:50,267 INFO [finetune.py:1317] (6/7) Maximum memory allocated so far is 5675MB
2023-03-25 21:46:50,962 INFO [finetune.py:1317] (6/7) Maximum memory allocated so far is 5675MB
2023-03-25 21:46:59,847 INFO [finetune.py:976] (6/7) Epoch 1, batch 0, loss[loss=7.597, simple_loss=6.898, pruned_loss=6.979, over 4896.00 frames. ], tot_loss[loss=7.597, simple_loss=6.898, pruned_loss=6.979, over 4896.00 frames. ], batch size: 43, lr: 2.00e-03, grad_scale: 2.0
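The checkpoint records above ("Loading checkpoint from ...epoch-30.pt", "Loading parameters starting with prefix encoder") show how fine-tuning is initialized: with init_modules: 'encoder', only the pretrained encoder weights are copied into the new model. A rough sketch of that kind of prefix-filtered loading, assuming the usual icefall checkpoint layout with a 'model' state_dict (the helper below is illustrative, not the actual finetune.py code):

    import torch

    def load_params_with_prefix(model, ckpt_path: str, prefix: str = "encoder"):
        # Copy only parameters whose names start with `prefix` from the
        # pretrained checkpoint into the current model's state_dict.
        ckpt = torch.load(ckpt_path, map_location="cpu")
        src = ckpt.get("model", ckpt)
        dst = model.state_dict()
        for name, tensor in src.items():
            if name.startswith(prefix) and name in dst and dst[name].shape == tensor.shape:
                dst[name].copy_(tensor)
        model.load_state_dict(dst)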
2023-03-25 21:46:59,847 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-25 21:47:09,862 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8084, 1.6454, 1.9442, 1.1996, 1.6506, 1.9148, 1.6136, 2.2087], device='cuda:6'), covar=tensor([0.0604, 0.0991, 0.0583, 0.0824, 0.0520, 0.0626, 0.1255, 0.0348], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0216, 0.0210, 0.0195, 0.0173, 0.0219, 0.0219, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 21:47:12,622 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2317, 1.5505, 1.7652, 0.7580, 1.2494, 1.6163, 1.7805, 1.5846], device='cuda:6'), covar=tensor([0.0673, 0.0258, 0.0204, 0.0385, 0.0310, 0.0495, 0.0173, 0.0378], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0176, 0.0133, 0.0143, 0.0148, 0.0145, 0.0168, 0.0183], device='cuda:6'), out_proj_covar=tensor([1.1281e-04, 1.3142e-04, 9.7380e-05, 1.0433e-04, 1.0803e-04, 1.0854e-04, 1.2654e-04, 1.3707e-04], device='cuda:6')
2023-03-25 21:47:14,953 INFO [finetune.py:1010] (6/7) Epoch 1, validation: loss=7.294, simple_loss=6.606, pruned_loss=6.863, over 2265189.00 frames.
2023-03-25 21:47:14,953 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 5675MB
2023-03-25 21:47:19,870 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:47:30,299 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:47:50,603 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0
2023-03-25 21:48:00,211 INFO [finetune.py:976] (6/7) Epoch 1, batch 50, loss[loss=2.624, simple_loss=2.472, pruned_loss=1.499, over 4818.00 frames. ], tot_loss[loss=4.253, simple_loss=3.818, pruned_loss=4.18, over 215907.26 frames. ], batch size: 40, lr: 2.20e-03, grad_scale: 0.000244140625
2023-03-25 21:48:33,002 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:48:53,452 WARNING [finetune.py:966] (6/7) Grad scale is small: 0.000244140625
2023-03-25 21:48:53,452 INFO [finetune.py:976] (6/7) Epoch 1, batch 100, loss[loss=2.125, simple_loss=2.016, pruned_loss=1.104, over 4713.00 frames. ], tot_loss[loss=3.478, simple_loss=3.204, pruned_loss=2.667, over 380576.87 frames. ], batch size: 23, lr: 2.40e-03, grad_scale: 0.00048828125
2023-03-25 21:49:13,202 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.539e+02 2.791e+03 6.484e+03 1.700e+04 1.722e+07, threshold=1.297e+04, percent-clipped=0.0
2023-03-25 21:49:17,077 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=3.27 vs. limit=2.0
2023-03-25 21:49:22,762 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=97.81 vs. limit=5.0
2023-03-25 21:49:25,605 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=3.30 vs. limit=2.0
2023-03-25 21:49:28,886 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=144.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:49:37,442 INFO [finetune.py:976] (6/7) Epoch 1, batch 150, loss[loss=1.751, simple_loss=1.583, pruned_loss=1.355, over 4789.00 frames. ], tot_loss[loss=2.875, simple_loss=2.657, pruned_loss=2.084, over 506739.63 frames. ], batch size: 29, lr: 2.60e-03, grad_scale: 0.00048828125
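With use_fp16: True the run uses a dynamic loss scale, and the "Grad scale is small" warnings above show the script nudging a collapsed scale back up: the logged grad_scale doubles from 0.000244140625 to 0.00048828125 right after the warning. A plausible sketch of such a guard around torch.cuda.amp.GradScaler, with the threshold chosen here only for illustration:

    import logging
    import torch

    scaler = torch.cuda.amp.GradScaler(enabled=True)

    def grow_small_grad_scale(scaler, min_scale=1.0):
        # If the dynamic loss scale has collapsed, warn and double it.
        cur = scaler.get_scale()
        if cur < min_scale:
            logging.warning(f"Grad scale is small: {cur}")
            scaler.update(new_scale=cur * 2.0)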
2023-03-25 21:50:11,682 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=15.55 vs. limit=5.0
2023-03-25 21:50:15,718 WARNING [finetune.py:966] (6/7) Grad scale is small: 0.00048828125
2023-03-25 21:50:15,718 INFO [finetune.py:976] (6/7) Epoch 1, batch 200, loss[loss=1.429, simple_loss=1.233, pruned_loss=1.353, over 4912.00 frames. ], tot_loss[loss=2.373, simple_loss=2.172, pruned_loss=1.794, over 608186.56 frames. ], batch size: 37, lr: 2.80e-03, grad_scale: 0.0009765625
2023-03-25 21:50:18,090 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=12.43 vs. limit=5.0
2023-03-25 21:50:29,335 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 2.018e+02 7.406e+02 1.293e+03 3.197e+03 6.754e+04, threshold=2.586e+03, percent-clipped=12.0
2023-03-25 21:50:54,580 INFO [finetune.py:976] (6/7) Epoch 1, batch 250, loss[loss=1.475, simple_loss=1.256, pruned_loss=1.396, over 4828.00 frames. ], tot_loss[loss=2.063, simple_loss=1.865, pruned_loss=1.634, over 685285.30 frames. ], batch size: 40, lr: 3.00e-03, grad_scale: 0.0009765625
2023-03-25 21:51:17,272 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.70 vs. limit=2.0
2023-03-25 21:51:43,806 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=296.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:51:45,821 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=300.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-25 21:51:46,256 WARNING [finetune.py:966] (6/7) Grad scale is small: 0.0009765625
2023-03-25 21:51:46,257 INFO [finetune.py:976] (6/7) Epoch 1, batch 300, loss[loss=1.321, simple_loss=1.11, pruned_loss=1.252, over 4817.00 frames. ], tot_loss[loss=1.852, simple_loss=1.653, pruned_loss=1.522, over 744714.84 frames. ], batch size: 30, lr: 3.20e-03, grad_scale: 0.001953125
2023-03-25 21:51:58,579 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 2.075e+01 5.781e+01 1.827e+02 5.788e+02 1.230e+04, threshold=3.655e+02, percent-clipped=4.0
2023-03-25 21:52:39,148 INFO [finetune.py:976] (6/7) Epoch 1, batch 350, loss[loss=1.222, simple_loss=1.013, pruned_loss=1.156, over 4768.00 frames. ], tot_loss[loss=1.704, simple_loss=1.501, pruned_loss=1.442, over 793167.54 frames. ], batch size: 27, lr: 3.40e-03, grad_scale: 0.001953125
2023-03-25 21:52:47,118 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=357.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 21:52:51,767 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=2.30 vs. limit=2.0
2023-03-25 21:53:14,573 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=387.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 21:53:27,703 WARNING [finetune.py:966] (6/7) Grad scale is small: 0.001953125
2023-03-25 21:53:27,703 INFO [finetune.py:976] (6/7) Epoch 1, batch 400, loss[loss=1.232, simple_loss=1.005, pruned_loss=1.169, over 4888.00 frames. ], tot_loss[loss=1.588, simple_loss=1.381, pruned_loss=1.374, over 829412.97 frames. ], batch size: 35, lr: 3.60e-03, grad_scale: 0.00390625
2023-03-25 21:53:29,100 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.51 vs. limit=2.0
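The optim.py:369 records above summarize gradient-norm statistics: the five numbers are the min/25%/median/75%/max of recent gradient norms, and the clipping threshold tracks Clipping_scale times the median (e.g. threshold=2.586e+03 is 2 x 1.293e+03). A rough sketch of how such quartiles and a median-based threshold could be computed (illustrative only, not the optimizer internals):

    import torch

    def grad_norm_stats(recent_norms: torch.Tensor, clipping_scale: float = 2.0):
        # recent_norms: 1-D tensor of gradient norms from recent steps.
        quartiles = torch.quantile(
            recent_norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0])
        )
        threshold = clipping_scale * quartiles[2]  # scale times the median
        percent_clipped = 100.0 * (recent_norms > threshold).float().mean()
        return quartiles, threshold, percent_clipped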
2023-03-25 21:53:39,885 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.702e+01 2.277e+01 3.517e+01 1.113e+02 1.032e+03, threshold=7.035e+01, percent-clipped=3.0
2023-03-25 21:53:51,258 WARNING [optim.py:389] (6/7) Scaling gradients by 0.06621765345335007, model_norm_threshold=70.34587860107422
2023-03-25 21:53:51,344 INFO [optim.py:451] (6/7) Parameter dominating tot_sumsq module.encoder.encoder_embed.conv.0.weight with proportion 0.67, where dominant_sumsq=(grad_sumsq*orig_rms_sq)=7.539e+05, grad_sumsq = 2.933e+06, orig_rms_sq=2.571e-01
2023-03-25 21:54:00,696 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=439.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 21:54:05,302 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=448.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:54:06,748 INFO [finetune.py:976] (6/7) Epoch 1, batch 450, loss[loss=1.099, simple_loss=0.8858, pruned_loss=1.033, over 4821.00 frames. ], tot_loss[loss=1.478, simple_loss=1.269, pruned_loss=1.301, over 856009.54 frames. ], batch size: 39, lr: 3.80e-03, grad_scale: 0.00390625
2023-03-25 21:54:29,380 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=49.05 vs. limit=5.0
2023-03-25 21:54:43,455 WARNING [finetune.py:966] (6/7) Grad scale is small: 0.00390625
2023-03-25 21:54:43,455 INFO [finetune.py:976] (6/7) Epoch 1, batch 500, loss[loss=0.9597, simple_loss=0.7648, pruned_loss=0.8935, over 4754.00 frames. ], tot_loss[loss=1.376, simple_loss=1.166, pruned_loss=1.224, over 878526.69 frames. ], batch size: 27, lr: 4.00e-03, grad_scale: 0.0078125
2023-03-25 21:54:52,763 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=3.62 vs. limit=2.0
2023-03-25 21:54:53,596 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3514, 1.3106, 0.7851, 1.3959, 1.4239, 1.3924, 1.3717, 0.8233], device='cuda:6'), covar=tensor([0.0039, 0.0053, 0.0085, 0.0064, 0.0048, 0.0056, 0.0034, 0.0076], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0128, 0.0158, 0.0123, 0.0119, 0.0125, 0.0103, 0.0130], device='cuda:6'), out_proj_covar=tensor([7.7799e-05, 1.0109e-04, 1.2883e-04, 9.7426e-05, 9.4837e-05, 9.5014e-05, 7.9090e-05, 1.0205e-04], device='cuda:6')
2023-03-25 21:54:57,596 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.430e+01 1.676e+01 1.950e+01 4.114e+01 1.062e+03, threshold=3.899e+01, percent-clipped=11.0
2023-03-25 21:55:07,426 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0975, 1.0777, 1.1543, 2.4482, 1.5641, 2.3891, 0.4326, 1.6387], device='cuda:6'), covar=tensor([0.1434, 0.2074, 0.1464, 0.1524, 0.1076, 0.1624, 0.1674, 0.1355], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0120, 0.0138, 0.0159, 0.0109, 0.0145, 0.0130, 0.0116], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-25 21:55:17,650 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=14.19 vs. limit=5.0
2023-03-25 21:55:28,672 INFO [finetune.py:976] (6/7) Epoch 1, batch 550, loss[loss=0.8102, simple_loss=0.638, pruned_loss=0.7476, over 4059.00 frames. ], tot_loss[loss=1.287, simple_loss=1.079, pruned_loss=1.152, over 895480.33 frames. ], batch size: 17, lr: 4.00e-03, grad_scale: 0.0078125
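The optim.py:389 warning above fires when the overall gradient norm exceeds model_norm_threshold; gradients are then scaled down by threshold / total_norm (0.0662 x a total norm of roughly 1.06e+03 recovers the logged threshold of 70.35), and optim.py:451 reports which parameter dominates the squared-gradient total. A simplified sketch of that clamp (not the actual optimizer code):

    import logging
    import torch

    def clamp_total_grad_norm(model, model_norm_threshold: float):
        # Scale all gradients down so the total norm never exceeds the threshold.
        total_norm = torch.sqrt(
            sum((p.grad ** 2).sum() for p in model.parameters() if p.grad is not None)
        )
        if total_norm > model_norm_threshold:
            scale = (model_norm_threshold / total_norm).item()
            logging.warning(
                f"Scaling gradients by {scale}, model_norm_threshold={model_norm_threshold}"
            )
            for p in model.parameters():
                if p.grad is not None:
                    p.grad.mul_(scale)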
2023-03-25 21:55:39,567 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=562.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:55:39,639 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.67 vs. limit=2.0
2023-03-25 21:55:42,634 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=568.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 21:56:03,087 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=590.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:56:09,552 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=600.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 21:56:09,983 WARNING [finetune.py:966] (6/7) Grad scale is small: 0.0078125
2023-03-25 21:56:09,983 INFO [finetune.py:976] (6/7) Epoch 1, batch 600, loss[loss=1.02, simple_loss=0.7978, pruned_loss=0.9244, over 4902.00 frames. ], tot_loss[loss=1.222, simple_loss=1.013, pruned_loss=1.096, over 909524.03 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 0.015625
2023-03-25 21:56:22,611 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.472e+01 1.758e+01 2.024e+01 2.271e+01 8.528e+01, threshold=4.048e+01, percent-clipped=5.0
2023-03-25 21:56:32,939 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=623.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:56:36,132 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=629.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 21:56:46,363 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.84 vs. limit=2.0
2023-03-25 21:56:54,361 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=648.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 21:56:55,842 INFO [finetune.py:976] (6/7) Epoch 1, batch 650, loss[loss=1.122, simple_loss=0.8773, pruned_loss=0.9871, over 4816.00 frames. ], tot_loss[loss=1.182, simple_loss=0.9681, pruned_loss=1.06, over 920504.45 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 0.015625
2023-03-25 21:56:55,933 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=651.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 21:56:56,433 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=652.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:57:01,897 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.14 vs. limit=2.0
2023-03-25 21:57:31,099 INFO [finetune.py:976] (6/7) Epoch 1, batch 700, loss[loss=0.9855, simple_loss=0.7567, pruned_loss=0.8699, over 4821.00 frames. ], tot_loss[loss=1.146, simple_loss=0.9281, pruned_loss=1.024, over 927109.38 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 0.03125
2023-03-25 21:57:38,290 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.810e+01 2.037e+01 2.232e+01 2.628e+01 5.516e+01, threshold=4.463e+01, percent-clipped=4.0
2023-03-25 21:57:58,788 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.55 vs. limit=2.0
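The zipformer.py:1188 records are layer-dropout diagnostics: while batch_count is inside a stack's warmup window (warmup_begin..warmup_end) the encoder randomly skips whole layers, and later in the log num_to_drop falls to 0 with layers_to_drop=set(). The exact schedule is not shown in the log; the sketch below is a hypothetical illustration of that pattern, not the actual Zipformer rule:

    import random

    def pick_layers_to_drop(batch_count, warmup_begin, warmup_end, num_layers):
        # Hypothetical schedule: drop up to 2 random layers early in the
        # warmup window, 1 later on, and none once warmup has finished.
        if batch_count >= warmup_end:
            return set()
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        num_to_drop = 2 if frac < 0.5 else 1
        return set(random.sample(range(num_layers), num_to_drop))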
2023-03-25 21:57:59,235 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=739.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 21:58:01,231 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=743.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 21:58:05,816 INFO [finetune.py:976] (6/7) Epoch 1, batch 750, loss[loss=1.069, simple_loss=0.8198, pruned_loss=0.9212, over 4806.00 frames. ], tot_loss[loss=1.114, simple_loss=0.892, pruned_loss=0.9897, over 933844.99 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 0.03125
2023-03-25 21:58:29,193 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=787.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 21:58:31,747 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1694, 1.5609, 2.0331, 0.8953, 2.3219, 2.4378, 1.6217, 1.8853], device='cuda:6'), covar=tensor([0.0100, 0.0211, 0.0254, 0.0346, 0.0225, 0.0178, 0.0292, 0.0201], device='cuda:6'), in_proj_covar=tensor([0.0175, 0.0188, 0.0209, 0.0188, 0.0210, 0.0204, 0.0218, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 21:58:36,508 INFO [finetune.py:976] (6/7) Epoch 1, batch 800, loss[loss=0.9454, simple_loss=0.7161, pruned_loss=0.8094, over 4930.00 frames. ], tot_loss[loss=1.085, simple_loss=0.8604, pruned_loss=0.9579, over 938962.89 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 0.0625
2023-03-25 21:58:45,189 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 2.059e+01 2.266e+01 2.508e+01 2.744e+01 4.199e+01, threshold=5.016e+01, percent-clipped=0.0
2023-03-25 21:58:45,307 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0247, 3.1199, 1.2996, 1.8906, 3.6048, 2.6535, 1.6853, 2.3268], device='cuda:6'), covar=tensor([0.0122, 0.0070, 0.0219, 0.0112, 0.0061, 0.0100, 0.0150, 0.0156], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0132, 0.0130, 0.0119, 0.0107, 0.0129, 0.0135, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 21:59:20,569 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=847.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 21:59:22,540 INFO [finetune.py:976] (6/7) Epoch 1, batch 850, loss[loss=0.8987, simple_loss=0.6753, pruned_loss=0.7601, over 4827.00 frames. ], tot_loss[loss=1.055, simple_loss=0.8281, pruned_loss=0.9236, over 943395.91 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 0.0625
2023-03-25 21:59:39,151 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=13.18 vs. limit=5.0
2023-03-25 21:59:40,596 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=26.46 vs. limit=5.0
2023-03-25 22:00:12,170 INFO [finetune.py:976] (6/7) Epoch 1, batch 900, loss[loss=0.9021, simple_loss=0.67, pruned_loss=0.7577, over 4808.00 frames. ], tot_loss[loss=1.025, simple_loss=0.7975, pruned_loss=0.89, over 946144.39 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 0.125
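The scaling.py:679 records compare a per-group "whitening" metric against a limit (2.0 or 5.0 depending on the module); the metric measures how far the feature covariance is from a multiple of the identity, with 1.0 meaning perfectly white channels. A rough illustration of such a metric for a single group (not the exact icefall formula):

    import torch

    def whitening_metric(x: torch.Tensor) -> torch.Tensor:
        # x: (num_frames, num_channels). Returns 1.0 when the channel
        # covariance is a multiple of the identity, and grows as channels
        # become correlated or unevenly scaled.
        x = x - x.mean(dim=0)
        cov = x.t() @ x / x.shape[0]
        num_channels = x.shape[1]
        trace_cov = cov.diagonal().sum()
        trace_cov_sq = (cov @ cov).diagonal().sum()
        return num_channels * trace_cov_sq / trace_cov ** 2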
2023-03-25 22:00:16,293 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=908.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:00:25,565 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 2.101e+01 2.406e+01 2.575e+01 3.027e+01 5.726e+01, threshold=5.150e+01, percent-clipped=1.0
2023-03-25 22:00:28,811 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=918.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-25 22:00:32,446 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=924.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:00:33,517 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=29.89 vs. limit=5.0
2023-03-25 22:00:53,647 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=946.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 22:00:56,166 INFO [finetune.py:976] (6/7) Epoch 1, batch 950, loss[loss=1.04, simple_loss=0.7693, pruned_loss=0.8587, over 4868.00 frames. ], tot_loss[loss=1.007, simple_loss=0.7764, pruned_loss=0.8662, over 949338.58 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 0.125
2023-03-25 22:00:56,741 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=952.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:01:43,865 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1000.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:01:44,323 INFO [finetune.py:976] (6/7) Epoch 1, batch 1000, loss[loss=1.019, simple_loss=0.7478, pruned_loss=0.8324, over 4805.00 frames. ], tot_loss[loss=1.009, simple_loss=0.7701, pruned_loss=0.8587, over 952276.63 frames. ], batch size: 41, lr: 4.00e-03, grad_scale: 0.25
2023-03-25 22:01:45,013 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0
2023-03-25 22:01:58,293 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 2.382e+01 2.890e+01 3.153e+01 3.664e+01 7.462e+01, threshold=6.306e+01, percent-clipped=2.0
2023-03-25 22:02:12,687 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=31.78 vs. limit=5.0
2023-03-25 22:02:15,826 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=17.18 vs. limit=5.0
2023-03-25 22:02:18,031 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.92 vs. limit=5.0
2023-03-25 22:02:21,749 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1043.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:02:31,286 INFO [finetune.py:976] (6/7) Epoch 1, batch 1050, loss[loss=1.043, simple_loss=0.7638, pruned_loss=0.8378, over 4863.00 frames. ], tot_loss[loss=1.006, simple_loss=0.7611, pruned_loss=0.8479, over 950750.18 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 0.25
2023-03-25 22:03:07,641 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1091.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:03:18,242 INFO [finetune.py:976] (6/7) Epoch 1, batch 1100, loss[loss=1.068, simple_loss=0.7758, pruned_loss=0.8489, over 4928.00 frames. ], tot_loss[loss=1.007, simple_loss=0.7555, pruned_loss=0.8385, over 953531.62 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 0.5
2023-03-25 22:03:30,769 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 2.698e+01 3.337e+01 3.640e+01 4.251e+01 7.174e+01, threshold=7.279e+01, percent-clipped=4.0
2023-03-25 22:04:04,831 INFO [finetune.py:976] (6/7) Epoch 1, batch 1150, loss[loss=0.9863, simple_loss=0.7188, pruned_loss=0.7675, over 4854.00 frames. ], tot_loss[loss=1.005, simple_loss=0.7492, pruned_loss=0.8265, over 952413.72 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 0.5
2023-03-25 22:04:06,000 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1153.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:04:16,574 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=20.20 vs. limit=5.0
2023-03-25 22:04:16,828 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.78 vs. limit=5.0
2023-03-25 22:04:41,643 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=14.89 vs. limit=5.0
2023-03-25 22:04:46,445 INFO [finetune.py:976] (6/7) Epoch 1, batch 1200, loss[loss=0.9571, simple_loss=0.7029, pruned_loss=0.7265, over 4923.00 frames. ], tot_loss[loss=0.9908, simple_loss=0.7354, pruned_loss=0.8036, over 953459.49 frames. ], batch size: 46, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:04:47,541 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1203.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:04:59,184 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 3.248e+01 4.460e+01 5.563e+01 6.854e+01 1.013e+02, threshold=1.113e+02, percent-clipped=20.0
2023-03-25 22:04:59,379 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1214.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:05:01,658 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1218.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:05:05,853 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0
2023-03-25 22:05:06,570 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1224.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:05:24,111 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1246.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:05:26,685 INFO [finetune.py:976] (6/7) Epoch 1, batch 1250, loss[loss=0.9111, simple_loss=0.6689, pruned_loss=0.6809, over 4907.00 frames. ], tot_loss[loss=0.9678, simple_loss=0.7171, pruned_loss=0.7732, over 953326.40 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:05:46,032 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1266.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:05:49,133 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1272.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:06:08,481 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1294.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:06:14,930 INFO [finetune.py:976] (6/7) Epoch 1, batch 1300, loss[loss=0.8809, simple_loss=0.6541, pruned_loss=0.6413, over 4825.00 frames. ], tot_loss[loss=0.9404, simple_loss=0.6969, pruned_loss=0.7391, over 953764.39 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:06:23,506 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 5.599e+01 8.403e+01 9.999e+01 1.262e+02 2.600e+02, threshold=2.000e+02, percent-clipped=40.0
2023-03-25 22:06:51,099 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.25 vs. limit=5.0
2023-03-25 22:06:57,097 INFO [finetune.py:976] (6/7) Epoch 1, batch 1350, loss[loss=0.8989, simple_loss=0.679, pruned_loss=0.6347, over 4868.00 frames. ], tot_loss[loss=0.92, simple_loss=0.6836, pruned_loss=0.7096, over 954903.18 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:07:11,426 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=6.62 vs. limit=5.0
2023-03-25 22:07:50,278 INFO [finetune.py:976] (6/7) Epoch 1, batch 1400, loss[loss=0.8902, simple_loss=0.6788, pruned_loss=0.6149, over 4811.00 frames. ], tot_loss[loss=0.9082, simple_loss=0.6777, pruned_loss=0.6865, over 955520.71 frames. ], batch size: 45, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:07:56,021 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.12 vs. limit=5.0
2023-03-25 22:07:58,271 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.277e+01 1.400e+02 1.610e+02 1.980e+02 2.974e+02, threshold=3.221e+02, percent-clipped=23.0
2023-03-25 22:08:09,180 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1434.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:08:20,137 INFO [finetune.py:976] (6/7) Epoch 1, batch 1450, loss[loss=0.831, simple_loss=0.6327, pruned_loss=0.5682, over 4919.00 frames. ], tot_loss[loss=0.8855, simple_loss=0.6647, pruned_loss=0.6562, over 955517.49 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:08:47,666 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1495.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:08:51,787 INFO [finetune.py:976] (6/7) Epoch 1, batch 1500, loss[loss=0.7562, simple_loss=0.5943, pruned_loss=0.4964, over 4926.00 frames. ], tot_loss[loss=0.8581, simple_loss=0.6491, pruned_loss=0.6228, over 956323.16 frames. ], batch size: 42, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:08:52,983 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1503.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:09:02,831 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1509.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:09:05,945 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.968e+01 1.844e+02 2.293e+02 2.711e+02 4.587e+02, threshold=4.586e+02, percent-clipped=13.0
2023-03-25 22:09:06,397 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-25 22:09:13,430 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-25 22:09:38,505 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0
2023-03-25 22:09:42,480 INFO [finetune.py:976] (6/7) Epoch 1, batch 1550, loss[loss=0.6782, simple_loss=0.537, pruned_loss=0.4379, over 4922.00 frames. ], tot_loss[loss=0.8257, simple_loss=0.6306, pruned_loss=0.5868, over 956911.87 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 1.0
2023-03-25 22:09:42,538 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1551.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:09:52,836 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1566.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:10:24,512 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1331, 2.4465, 2.4572, 0.8826, 2.5668, 2.9432, 2.2265, 2.1623], device='cuda:6'), covar=tensor([0.0630, 0.0219, 0.0141, 0.0590, 0.0196, 0.0089, 0.0231, 0.0278], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0158, 0.0120, 0.0130, 0.0134, 0.0129, 0.0154, 0.0164], device='cuda:6'), out_proj_covar=tensor([1.0182e-04, 1.1828e-04, 8.8177e-05, 9.5251e-05, 9.7569e-05, 9.6272e-05, 1.1540e-04, 1.2264e-04], device='cuda:6')
2023-03-25 22:10:33,771 INFO [finetune.py:976] (6/7) Epoch 1, batch 1600, loss[loss=0.5813, simple_loss=0.4731, pruned_loss=0.3627, over 4819.00 frames. ], tot_loss[loss=0.7862, simple_loss=0.6065, pruned_loss=0.5474, over 956160.03 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:10:40,900 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1611.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:10:42,504 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6094, 1.2817, 0.8727, 1.5655, 1.8616, 1.3844, 1.2556, 1.6243], device='cuda:6'), covar=tensor([0.1868, 0.2143, 0.2100, 0.1235, 0.3092, 0.2294, 0.1528, 0.1738], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0099, 0.0115, 0.0096, 0.0126, 0.0093, 0.0100, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-25 22:10:42,943 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.367e+02 1.965e+02 2.441e+02 2.819e+02 5.041e+02, threshold=4.882e+02, percent-clipped=1.0
2023-03-25 22:10:55,951 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1627.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:11:18,720 INFO [finetune.py:976] (6/7) Epoch 1, batch 1650, loss[loss=0.6383, simple_loss=0.5183, pruned_loss=0.3962, over 4827.00 frames. ], tot_loss[loss=0.7513, simple_loss=0.5857, pruned_loss=0.5126, over 956248.91 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:11:41,971 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1672.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:11:43,106 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=15.41 vs. limit=5.0
2023-03-25 22:11:47,982 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6359, 0.5484, 0.6008, 0.6016, 0.7097, 0.6598, 0.7471, 0.5248], device='cuda:6'), covar=tensor([11.5761, 22.6897, 14.8760, 17.4788, 10.9470, 9.1930, 7.8829, 18.9877], device='cuda:6'), in_proj_covar=tensor([0.0179, 0.0209, 0.0246, 0.0272, 0.0228, 0.0191, 0.0195, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:12:02,515 INFO [finetune.py:976] (6/7) Epoch 1, batch 1700, loss[loss=0.6756, simple_loss=0.5556, pruned_loss=0.4117, over 4938.00 frames. ], tot_loss[loss=0.7191, simple_loss=0.5665, pruned_loss=0.481, over 956610.27 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 2.0
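The zipformer.py:2441 dumps report attn_weights_entropy, a per-head diagnostic of how concentrated the self-attention distributions are (values near 0 mean near one-hot attention). A sketch of such a statistic, assuming attention weights normalized over the key axis (illustrative, not the exact icefall computation):

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        # attn: (num_heads, num_queries, num_keys), rows summing to 1.
        # Returns one entropy value per head, averaged over queries.
        p = attn.clamp(min=1e-20)
        return -(p * p.log()).sum(dim=-1).mean(dim=-1)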
2023-03-25 22:12:14,552 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.227e+02 2.187e+02 2.736e+02 3.197e+02 8.210e+02, threshold=5.471e+02, percent-clipped=2.0
2023-03-25 22:12:53,680 INFO [finetune.py:976] (6/7) Epoch 1, batch 1750, loss[loss=0.7112, simple_loss=0.5972, pruned_loss=0.4228, over 4851.00 frames. ], tot_loss[loss=0.6982, simple_loss=0.5564, pruned_loss=0.4574, over 958047.12 frames. ], batch size: 44, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:12:55,039 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.67 vs. limit=5.0
2023-03-25 22:13:29,980 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1790.0, num_to_drop=1, layers_to_drop={3}
2023-03-25 22:13:36,051 INFO [finetune.py:976] (6/7) Epoch 1, batch 1800, loss[loss=0.6327, simple_loss=0.5389, pruned_loss=0.3696, over 4891.00 frames. ], tot_loss[loss=0.6826, simple_loss=0.5502, pruned_loss=0.4383, over 957117.21 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:13:40,494 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=1809.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:13:43,024 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 2.215e+02 2.629e+02 3.291e+02 5.990e+02, threshold=5.258e+02, percent-clipped=1.0
2023-03-25 22:13:59,220 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1838.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:14:06,690 INFO [finetune.py:976] (6/7) Epoch 1, batch 1850, loss[loss=0.624, simple_loss=0.5419, pruned_loss=0.3568, over 4918.00 frames. ], tot_loss[loss=0.6654, simple_loss=0.5425, pruned_loss=0.4193, over 956960.27 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:14:10,081 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=1857.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:14:10,134 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1857.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:14:12,903 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2573, 1.1767, 1.5919, 2.3003, 1.6435, 1.9932, 0.8749, 1.8350], device='cuda:6'), covar=tensor([0.2049, 0.1693, 0.1089, 0.0537, 0.1002, 0.1114, 0.1732, 0.0960], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0115, 0.0129, 0.0150, 0.0103, 0.0137, 0.0122, 0.0107], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-25 22:14:17,816 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=1870.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:14:51,505 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1899.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:14:52,519 INFO [finetune.py:976] (6/7) Epoch 1, batch 1900, loss[loss=0.6338, simple_loss=0.5235, pruned_loss=0.3758, over 4812.00 frames. ], tot_loss[loss=0.6502, simple_loss=0.5356, pruned_loss=0.4028, over 955815.26 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:15:01,595 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-25 22:15:03,951 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.381e+02 2.208e+02 2.560e+02 3.227e+02 6.450e+02, threshold=5.121e+02, percent-clipped=1.0
2023-03-25 22:15:11,554 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1918.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:15:14,235 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1922.0, num_to_drop=1, layers_to_drop={3}
2023-03-25 22:15:18,611 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0955, 1.6289, 2.3924, 1.3127, 2.0237, 2.3177, 1.5169, 2.5241], device='cuda:6'), covar=tensor([0.1936, 0.2398, 0.1386, 0.2512, 0.1229, 0.1551, 0.3231, 0.1108], device='cuda:6'), in_proj_covar=tensor([0.0181, 0.0197, 0.0190, 0.0178, 0.0156, 0.0196, 0.0203, 0.0176], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:15:20,180 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=1931.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 22:15:33,999 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0159, 1.4945, 1.3323, 1.1193, 1.7481, 2.2112, 1.9179, 1.3540], device='cuda:6'), covar=tensor([0.0101, 0.0290, 0.0490, 0.0318, 0.0177, 0.0144, 0.0208, 0.0289], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0120, 0.0146, 0.0116, 0.0110, 0.0113, 0.0094, 0.0119], device='cuda:6'), out_proj_covar=tensor([7.1434e-05, 9.4631e-05, 1.1952e-04, 9.2121e-05, 8.7739e-05, 8.5079e-05, 7.2964e-05, 9.3665e-05], device='cuda:6')
2023-03-25 22:15:37,064 INFO [finetune.py:976] (6/7) Epoch 1, batch 1950, loss[loss=0.541, simple_loss=0.4769, pruned_loss=0.3033, over 4868.00 frames. ], tot_loss[loss=0.6297, simple_loss=0.524, pruned_loss=0.3839, over 956141.43 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 2.0
2023-03-25 22:15:42,180 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8320, 1.1868, 2.0719, 1.2512, 1.9443, 2.0099, 1.2944, 2.3505], device='cuda:6'), covar=tensor([0.1599, 0.2079, 0.1207, 0.1922, 0.0939, 0.1054, 0.2715, 0.0734], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0197, 0.0189, 0.0177, 0.0156, 0.0196, 0.0203, 0.0176], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:15:46,016 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=1967.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:16:03,926 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.48 vs. limit=5.0
2023-03-25 22:16:12,902 INFO [finetune.py:976] (6/7) Epoch 1, batch 2000, loss[loss=0.5234, simple_loss=0.4526, pruned_loss=0.2971, over 4721.00 frames. ], tot_loss[loss=0.6088, simple_loss=0.5113, pruned_loss=0.3658, over 957491.41 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 4.0
2023-03-25 22:16:22,850 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.403e+02 2.183e+02 2.758e+02 3.285e+02 7.843e+02, threshold=5.515e+02, percent-clipped=1.0
2023-03-25 22:16:57,264 INFO [finetune.py:976] (6/7) Epoch 1, batch 2050, loss[loss=0.4276, simple_loss=0.3982, pruned_loss=0.2286, over 4891.00 frames. ], tot_loss[loss=0.5832, simple_loss=0.4957, pruned_loss=0.3452, over 958360.19 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:17:07,933 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0
2023-03-25 22:17:31,396 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2090.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:17:41,736 INFO [finetune.py:976] (6/7) Epoch 1, batch 2100, loss[loss=0.4486, simple_loss=0.4165, pruned_loss=0.2404, over 4752.00 frames. ], tot_loss[loss=0.5655, simple_loss=0.4865, pruned_loss=0.3299, over 956570.33 frames. ], batch size: 27, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:17:55,062 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.336e+02 2.022e+02 2.484e+02 2.961e+02 6.695e+02, threshold=4.968e+02, percent-clipped=1.0
2023-03-25 22:18:12,541 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2138.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:18:29,605 INFO [finetune.py:976] (6/7) Epoch 1, batch 2150, loss[loss=0.582, simple_loss=0.5138, pruned_loss=0.3251, over 4826.00 frames. ], tot_loss[loss=0.556, simple_loss=0.4835, pruned_loss=0.3203, over 955275.92 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:18:34,281 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.57 vs. limit=2.0
2023-03-25 22:19:03,510 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2194.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:19:08,497 INFO [finetune.py:976] (6/7) Epoch 1, batch 2200, loss[loss=0.4317, simple_loss=0.4091, pruned_loss=0.2272, over 4751.00 frames. ], tot_loss[loss=0.545, simple_loss=0.4795, pruned_loss=0.3099, over 955150.45 frames. ], batch size: 27, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:19:17,019 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2213.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:19:17,475 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.568e+02 2.355e+02 2.819e+02 3.325e+02 5.172e+02, threshold=5.637e+02, percent-clipped=1.0
2023-03-25 22:19:22,622 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2222.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:19:28,125 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2226.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:19:57,056 INFO [finetune.py:976] (6/7) Epoch 1, batch 2250, loss[loss=0.491, simple_loss=0.4624, pruned_loss=0.2598, over 4918.00 frames. ], tot_loss[loss=0.5324, simple_loss=0.4735, pruned_loss=0.2993, over 953586.44 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:20:18,381 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2267.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:20:20,123 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2270.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:21:00,799 INFO [finetune.py:976] (6/7) Epoch 1, batch 2300, loss[loss=0.5133, simple_loss=0.4715, pruned_loss=0.2775, over 4917.00 frames. ], tot_loss[loss=0.5227, simple_loss=0.469, pruned_loss=0.291, over 953363.54 frames. ], batch size: 42, lr: 4.00e-03, grad_scale: 8.0
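Each finetune.py:976 record reports both the current batch's loss (loss[... over N frames]) and a running tot_loss accumulated "over M frames" since the last reset (reset_interval: 200 in the config above). One plausible way to maintain such a frame-weighted running average (illustrative; icefall's own tracker does the bookkeeping differently):

    class RunningLoss:
        # Frame-weighted running average of a loss, reset periodically.
        def __init__(self):
            self.loss_sum = 0.0
            self.num_frames = 0.0

        def update(self, batch_loss: float, batch_frames: float):
            self.loss_sum += batch_loss * batch_frames
            self.num_frames += batch_frames

        @property
        def average(self) -> float:
            return self.loss_sum / max(self.num_frames, 1.0)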
2023-03-25 22:21:15,881 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.417e+02 2.050e+02 2.425e+02 2.921e+02 4.362e+02, threshold=4.850e+02, percent-clipped=0.0
2023-03-25 22:21:22,459 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2315.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:21:42,809 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2340.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:21:54,801 INFO [finetune.py:976] (6/7) Epoch 1, batch 2350, loss[loss=0.3903, simple_loss=0.3843, pruned_loss=0.1981, over 4856.00 frames. ], tot_loss[loss=0.5055, simple_loss=0.4583, pruned_loss=0.2786, over 955119.23 frames. ], batch size: 49, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:22:16,958 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1580, 0.6842, 1.1284, 0.8713, 0.8751, 0.8369, 0.8949, 0.9535], device='cuda:6'), covar=tensor([11.7958, 23.5261, 10.9599, 16.9570, 20.8620, 10.7058, 26.5635, 10.6233], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0253, 0.0242, 0.0274, 0.0263, 0.0222, 0.0291, 0.0215], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 22:22:57,121 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-25 22:22:57,441 INFO [finetune.py:976] (6/7) Epoch 1, batch 2400, loss[loss=0.4591, simple_loss=0.4231, pruned_loss=0.2475, over 4832.00 frames. ], tot_loss[loss=0.4923, simple_loss=0.4498, pruned_loss=0.2691, over 957084.90 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:22:57,553 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2401.0, num_to_drop=2, layers_to_drop={1, 3}
2023-03-25 22:23:00,311 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.8070, 0.7751, 0.7800, 1.0965, 1.0838, 0.8206, 1.4643, 0.7415], device='cuda:6'), covar=tensor([ 5.4462, 11.7839, 6.5674, 9.0796, 4.8648, 3.7729, 3.9914, 8.7830], device='cuda:6'), in_proj_covar=tensor([0.0159, 0.0187, 0.0222, 0.0243, 0.0203, 0.0172, 0.0177, 0.0182], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002], device='cuda:6')
2023-03-25 22:23:01,149 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-25 22:23:05,863 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.408e+02 1.953e+02 2.427e+02 2.971e+02 6.309e+02, threshold=4.853e+02, percent-clipped=1.0
2023-03-25 22:23:32,010 INFO [finetune.py:976] (6/7) Epoch 1, batch 2450, loss[loss=0.4582, simple_loss=0.4291, pruned_loss=0.2436, over 4826.00 frames. ], tot_loss[loss=0.4781, simple_loss=0.4407, pruned_loss=0.2591, over 956086.78 frames. ], batch size: 30, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:23:44,053 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.63 vs. limit=5.0
2023-03-25 22:24:21,704 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2494.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 22:24:22,800 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6839, 3.9725, 4.0306, 1.8523, 4.2096, 3.0965, 1.0247, 2.7661], device='cuda:6'), covar=tensor([0.1912, 0.1080, 0.1104, 0.2681, 0.0713, 0.0646, 0.3554, 0.1102], device='cuda:6'), in_proj_covar=tensor([0.0145, 0.0146, 0.0153, 0.0120, 0.0143, 0.0109, 0.0134, 0.0112], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:24:25,635 INFO [finetune.py:976] (6/7) Epoch 1, batch 2500, loss[loss=0.4601, simple_loss=0.417, pruned_loss=0.2516, over 4215.00 frames. ], tot_loss[loss=0.4714, simple_loss=0.4375, pruned_loss=0.2536, over 954514.80 frames. ], batch size: 18, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:24:31,188 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0
2023-03-25 22:24:34,898 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2513.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:24:35,370 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.353e+02 2.241e+02 2.593e+02 3.079e+02 4.323e+02, threshold=5.185e+02, percent-clipped=0.0
2023-03-25 22:24:44,963 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2526.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:24:45,560 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2024, 1.8804, 2.3725, 3.7262, 2.8662, 2.4019, 0.6124, 3.0889], device='cuda:6'), covar=tensor([0.1701, 0.1479, 0.1273, 0.0437, 0.0808, 0.1654, 0.2256, 0.0654], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0112, 0.0127, 0.0146, 0.0100, 0.0135, 0.0119, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-25 22:24:54,543 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2542.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:24:54,605 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7210, 1.4672, 1.6848, 1.6529, 2.1591, 1.5656, 1.3212, 1.3056], device='cuda:6'), covar=tensor([0.3105, 0.4165, 0.2803, 0.2811, 0.3265, 0.2391, 0.5427, 0.2754], device='cuda:6'), in_proj_covar=tensor([0.0221, 0.0205, 0.0193, 0.0179, 0.0229, 0.0178, 0.0202, 0.0180], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:25:00,194 INFO [finetune.py:976] (6/7) Epoch 1, batch 2550, loss[loss=0.4408, simple_loss=0.437, pruned_loss=0.2223, over 4898.00 frames. ], tot_loss[loss=0.4696, simple_loss=0.4391, pruned_loss=0.2508, over 953794.48 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:25:09,532 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-25 22:25:09,686 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2561.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:25:18,671 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=2574.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:25:29,554 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0641, 0.8310, 0.6928, 0.7561, 0.8208, 0.7403, 0.7474, 1.3276], device='cuda:6'), covar=tensor([17.4693, 24.7385, 17.3296, 30.0021, 16.9773, 11.9229, 26.9638, 6.3059], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0228, 0.0203, 0.0263, 0.0220, 0.0189, 0.0228, 0.0169], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 22:25:48,350 INFO [finetune.py:976] (6/7) Epoch 1, batch 2600, loss[loss=0.488, simple_loss=0.4615, pruned_loss=0.2572, over 4754.00 frames. ], tot_loss[loss=0.4659, simple_loss=0.4382, pruned_loss=0.2474, over 951277.00 frames. ], batch size: 27, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:25:55,822 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.651e+02 2.205e+02 2.587e+02 2.996e+02 4.228e+02, threshold=5.174e+02, percent-clipped=0.0
2023-03-25 22:26:08,208 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4211, 1.5952, 1.3805, 1.5216, 0.9016, 2.9276, 0.9130, 1.4749], device='cuda:6'), covar=tensor([0.3814, 0.2419, 0.2400, 0.2529, 0.2346, 0.0280, 0.2927, 0.1543], device='cuda:6'), in_proj_covar=tensor([0.0115, 0.0099, 0.0107, 0.0104, 0.0095, 0.0084, 0.0081, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0004, 0.0004, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-25 22:26:19,956 INFO [finetune.py:976] (6/7) Epoch 1, batch 2650, loss[loss=0.4444, simple_loss=0.4483, pruned_loss=0.2203, over 4905.00 frames. ], tot_loss[loss=0.4592, simple_loss=0.4356, pruned_loss=0.2419, over 951934.15 frames. ], batch size: 46, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:26:22,869 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3574, 1.2316, 1.5279, 2.3531, 1.7333, 1.9831, 0.9646, 1.8661], device='cuda:6'), covar=tensor([0.1754, 0.1637, 0.1128, 0.0605, 0.0945, 0.1185, 0.1554, 0.0838], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0113, 0.0128, 0.0147, 0.0100, 0.0136, 0.0120, 0.0104], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-25 22:26:54,457 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2685.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:27:06,872 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=2696.0, num_to_drop=1, layers_to_drop={3}
2023-03-25 22:27:15,134 INFO [finetune.py:976] (6/7) Epoch 1, batch 2700, loss[loss=0.4089, simple_loss=0.4028, pruned_loss=0.2075, over 4820.00 frames. ], tot_loss[loss=0.4514, simple_loss=0.431, pruned_loss=0.2363, over 952428.80 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:27:28,252 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.346e+02 2.127e+02 2.493e+02 3.058e+02 5.200e+02, threshold=4.985e+02, percent-clipped=1.0
2023-03-25 22:28:14,551 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2746.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:28:17,274 INFO [finetune.py:976] (6/7) Epoch 1, batch 2750, loss[loss=0.4083, simple_loss=0.412, pruned_loss=0.2023, over 4819.00 frames. ], tot_loss[loss=0.4411, simple_loss=0.424, pruned_loss=0.2294, over 953421.15 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:28:30,476 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=6.14 vs. limit=5.0
2023-03-25 22:28:58,716 INFO [finetune.py:976] (6/7) Epoch 1, batch 2800, loss[loss=0.3208, simple_loss=0.3209, pruned_loss=0.1604, over 4156.00 frames. ], tot_loss[loss=0.4308, simple_loss=0.416, pruned_loss=0.223, over 953820.66 frames. ], batch size: 18, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:29:06,642 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.489e+02 2.264e+02 2.537e+02 3.001e+02 5.007e+02, threshold=5.073e+02, percent-clipped=1.0
2023-03-25 22:29:12,568 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2824.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:29:19,576 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-25 22:29:36,509 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0
2023-03-25 22:29:40,283 INFO [finetune.py:976] (6/7) Epoch 1, batch 2850, loss[loss=0.4016, simple_loss=0.3678, pruned_loss=0.2177, over 3975.00 frames. ], tot_loss[loss=0.4246, simple_loss=0.412, pruned_loss=0.2188, over 952920.26 frames. ], batch size: 17, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:29:57,114 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.63 vs. limit=2.0
2023-03-25 22:30:03,461 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2885.0, num_to_drop=2, layers_to_drop={0, 2}
2023-03-25 22:30:15,599 INFO [finetune.py:976] (6/7) Epoch 1, batch 2900, loss[loss=0.4252, simple_loss=0.4217, pruned_loss=0.2144, over 4122.00 frames. ], tot_loss[loss=0.4277, simple_loss=0.4164, pruned_loss=0.2197, over 954205.16 frames. ], batch size: 65, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:30:23,165 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.426e+02 2.100e+02 2.461e+02 2.914e+02 6.574e+02, threshold=4.923e+02, percent-clipped=3.0
2023-03-25 22:30:37,170 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=2937.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:30:48,833 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.05 vs. limit=5.0
2023-03-25 22:30:50,913 INFO [finetune.py:976] (6/7) Epoch 1, batch 2950, loss[loss=0.4315, simple_loss=0.4276, pruned_loss=0.2177, over 4849.00 frames. ], tot_loss[loss=0.4285, simple_loss=0.4193, pruned_loss=0.219, over 955976.99 frames. ], batch size: 47, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:31:04,001 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.60 vs. limit=5.0
2023-03-25 22:31:06,707 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=7.21 vs. limit=5.0
2023-03-25 22:31:06,707 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=7.21 vs. limit=5.0
2023-03-25 22:31:31,735 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=2996.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:31:33,402 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=2998.0, num_to_drop=2, layers_to_drop={0, 3}
2023-03-25 22:31:35,184 INFO [finetune.py:976] (6/7) Epoch 1, batch 3000, loss[loss=0.4279, simple_loss=0.4294, pruned_loss=0.2132, over 4922.00 frames. ], tot_loss[loss=0.4293, simple_loss=0.4204, pruned_loss=0.2191, over 953728.48 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:31:35,184 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-25 22:31:56,383 INFO [finetune.py:1010] (6/7) Epoch 1, validation: loss=0.4228, simple_loss=0.4589, pruned_loss=0.1933, over 2265189.00 frames.
2023-03-25 22:31:56,383 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 5675MB
2023-03-25 22:32:17,071 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.364e+02 2.092e+02 2.490e+02 2.940e+02 5.162e+02, threshold=4.980e+02, percent-clipped=2.0
2023-03-25 22:32:25,823 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3019.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:32:39,271 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3041.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:32:40,987 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3044.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:32:44,999 INFO [finetune.py:976] (6/7) Epoch 1, batch 3050, loss[loss=0.427, simple_loss=0.4286, pruned_loss=0.2127, over 4889.00 frames. ], tot_loss[loss=0.4246, simple_loss=0.4181, pruned_loss=0.2157, over 955010.27 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:33:09,822 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3080.0, num_to_drop=2, layers_to_drop={1, 2}
2023-03-25 22:33:38,271 INFO [finetune.py:976] (6/7) Epoch 1, batch 3100, loss[loss=0.3667, simple_loss=0.3703, pruned_loss=0.1816, over 4815.00 frames. ], tot_loss[loss=0.416, simple_loss=0.4116, pruned_loss=0.2103, over 956291.08 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:33:51,987 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.371e+02 2.009e+02 2.458e+02 3.052e+02 4.298e+02, threshold=4.916e+02, percent-clipped=0.0
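The [zipformer.py:1188] lines come from the encoder's stochastic layer dropping: each encoder stack owns a warmup window measured in batches (hence the staggered 666.7/1333.3/.../4000.0 begin-end pairs across stacks), and for every batch a random subset of that stack's layers is bypassed, reported as num_to_drop and layers_to_drop. The probability schedule below is purely an assumption for illustration; only the logged field names are taken from this run:

    import random

    def pick_layers_to_drop(batch_count: float, warmup_begin: float,
                            warmup_end: float, num_layers: int) -> set:
        # Assumed schedule: drop aggressively before the window, taper to a
        # small residual rate once the stack is warmed up.
        if batch_count <= warmup_begin:
            p_drop = 0.5
        elif batch_count >= warmup_end:
            p_drop = 0.05
        else:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p_drop = 0.5 + frac * (0.05 - 0.5)  # linear decay 0.5 -> 0.05
        num_to_drop = sum(random.random() < p_drop for _ in range(num_layers))
        return set(random.sample(range(num_layers), num_to_drop))

With a handful of layers per stack this yields patterns like those in the log: usually set(), occasionally one or two layer indices.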
2023-03-25 22:34:39,802 INFO [finetune.py:976] (6/7) Epoch 1, batch 3150, loss[loss=0.4166, simple_loss=0.4082, pruned_loss=0.2125, over 4860.00 frames. ], tot_loss[loss=0.4093, simple_loss=0.4063, pruned_loss=0.2062, over 954651.69 frames. ], batch size: 49, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:34:39,916 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3286, 1.8592, 1.9791, 0.9767, 2.0645, 1.8317, 1.3688, 2.0667], device='cuda:6'), covar=tensor([0.0632, 0.1074, 0.1207, 0.2299, 0.0936, 0.1634, 0.1808, 0.0987], device='cuda:6'), in_proj_covar=tensor([0.0147, 0.0162, 0.0176, 0.0162, 0.0178, 0.0177, 0.0186, 0.0173], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:34:48,439 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0747, 2.5233, 1.7473, 1.5445, 2.9755, 2.7524, 2.2021, 2.2702], device='cuda:6'), covar=tensor([0.0936, 0.0588, 0.1052, 0.1158, 0.0394, 0.0811, 0.0954, 0.1045], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0131, 0.0131, 0.0120, 0.0107, 0.0130, 0.0136, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:35:16,525 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3180.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:35:39,618 INFO [finetune.py:976] (6/7) Epoch 1, batch 3200, loss[loss=0.3342, simple_loss=0.3518, pruned_loss=0.1584, over 4805.00 frames. ], tot_loss[loss=0.4001, simple_loss=0.3991, pruned_loss=0.2005, over 955352.50 frames. ], batch size: 29, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:35:52,730 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.973e+02 2.320e+02 2.787e+02 5.091e+02, threshold=4.641e+02, percent-clipped=1.0
2023-03-25 22:36:24,738 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.00 vs. limit=2.0
2023-03-25 22:36:29,142 INFO [finetune.py:976] (6/7) Epoch 1, batch 3250, loss[loss=0.3983, simple_loss=0.3977, pruned_loss=0.1994, over 4762.00 frames. ], tot_loss[loss=0.3968, simple_loss=0.3969, pruned_loss=0.1984, over 952023.03 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:36:49,967 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.41 vs. limit=5.0
2023-03-25 22:37:12,285 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3288.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:37:19,995 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3293.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:37:22,937 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3893, 1.5647, 1.4314, 1.5856, 0.9219, 3.2117, 1.1541, 1.7831], device='cuda:6'), covar=tensor([0.3913, 0.2598, 0.2296, 0.2285, 0.2398, 0.0212, 0.2983, 0.1621], device='cuda:6'), in_proj_covar=tensor([0.0118, 0.0102, 0.0109, 0.0107, 0.0099, 0.0086, 0.0085, 0.0084], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:6')
2023-03-25 22:37:24,555 INFO [finetune.py:976] (6/7) Epoch 1, batch 3300, loss[loss=0.4134, simple_loss=0.4235, pruned_loss=0.2016, over 4922.00 frames. ], tot_loss[loss=0.399, simple_loss=0.401, pruned_loss=0.1985, over 953608.06 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:37:32,703 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.553e+02 2.210e+02 2.512e+02 3.057e+02 4.555e+02, threshold=5.024e+02, percent-clipped=0.0
2023-03-25 22:38:03,246 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3341.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:38:13,338 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3349.0, num_to_drop=2, layers_to_drop={0, 1}
2023-03-25 22:38:14,397 INFO [finetune.py:976] (6/7) Epoch 1, batch 3350, loss[loss=0.4461, simple_loss=0.4411, pruned_loss=0.2256, over 4816.00 frames. ], tot_loss[loss=0.3993, simple_loss=0.4023, pruned_loss=0.1981, over 952136.02 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:38:25,350 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0
2023-03-25 22:38:44,440 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3375.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:38:59,534 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3389.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:39:06,748 INFO [finetune.py:976] (6/7) Epoch 1, batch 3400, loss[loss=0.4477, simple_loss=0.4419, pruned_loss=0.2267, over 4894.00 frames. ], tot_loss[loss=0.3981, simple_loss=0.4027, pruned_loss=0.1968, over 953478.97 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:39:14,342 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5727, 1.4324, 1.7742, 1.1252, 1.3737, 1.6696, 1.4326, 1.8648], device='cuda:6'), covar=tensor([0.1694, 0.1943, 0.1207, 0.1758, 0.1093, 0.1278, 0.2422, 0.0975], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0196, 0.0192, 0.0181, 0.0163, 0.0204, 0.0202, 0.0181], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:39:20,793 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.307e+02 1.988e+02 2.392e+02 2.720e+02 4.202e+02, threshold=4.784e+02, percent-clipped=0.0
2023-03-25 22:39:57,339 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.82 vs. limit=5.0
2023-03-25 22:39:57,875 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3344, 1.5620, 1.2759, 1.0535, 1.7163, 2.8183, 2.2793, 1.4896], device='cuda:6'), covar=tensor([0.0171, 0.0634, 0.0645, 0.0688, 0.0376, 0.0194, 0.0207, 0.0586], device='cuda:6'), in_proj_covar=tensor([0.0086, 0.0115, 0.0135, 0.0113, 0.0106, 0.0103, 0.0088, 0.0114], device='cuda:6'), out_proj_covar=tensor([6.7762e-05, 9.0742e-05, 1.0987e-04, 8.9619e-05, 8.4318e-05, 7.7200e-05, 6.8526e-05, 8.9452e-05], device='cuda:6')
2023-03-25 22:40:08,843 INFO [finetune.py:976] (6/7) Epoch 1, batch 3450, loss[loss=0.3953, simple_loss=0.4004, pruned_loss=0.1951, over 4815.00 frames. ], tot_loss[loss=0.3955, simple_loss=0.401, pruned_loss=0.195, over 953856.73 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:40:40,391 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3480.0, num_to_drop=0, layers_to_drop=set()
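On the [scaling.py:679] Whitening lines: a whitening constraint watches the covariance of a module's activations, and these messages fire when the measured metric exceeds the configured limit (metric=5.41 vs. limit=5.0 above), at which point a corrective penalty is applied. One plausible form of the metric, assumed here rather than verified against this icefall revision: split the channels into num_groups groups and compare the mean squared eigenvalue of each group's covariance against the squared mean eigenvalue; the ratio is exactly 1.0 for a perfectly white (isotropic) covariance and grows as the spectrum becomes lopsided.

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels); channels split evenly across groups.
        n, c = x.shape
        d = c // num_groups
        xg = x.reshape(n, num_groups, d).transpose(0, 1)      # (groups, n, d)
        cov = xg.transpose(1, 2) @ xg / n                     # (groups, d, d)
        mean_eig = cov.diagonal(dim1=1, dim2=2).mean()        # mean eigenvalue
        mean_sq_eig = (cov ** 2).sum(dim=(1, 2)).mean() / d   # mean of squared eigenvalues
        return mean_sq_eig / mean_eig ** 2  # 1.0 iff cov is a scaled identity

    x = torch.randn(10000, 384)               # white Gaussian features
    print(whitening_metric(x, num_groups=1))  # close to 1.0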
2023-03-25 22:41:03,279 INFO [finetune.py:976] (6/7) Epoch 1, batch 3500, loss[loss=0.3547, simple_loss=0.3708, pruned_loss=0.1693, over 4827.00 frames. ], tot_loss[loss=0.3935, simple_loss=0.3983, pruned_loss=0.1944, over 953643.25 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:41:12,654 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6423, 1.5404, 1.9850, 3.0692, 2.2180, 2.2431, 0.8954, 2.3998], device='cuda:6'), covar=tensor([0.1607, 0.1458, 0.1200, 0.0462, 0.0760, 0.1510, 0.1805, 0.0669], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0113, 0.0130, 0.0150, 0.0101, 0.0138, 0.0121, 0.0104], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-25 22:41:14,914 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.301e+02 2.271e+02 2.832e+02 3.824e+02 1.123e+03, threshold=5.664e+02, percent-clipped=12.0
2023-03-25 22:41:30,605 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3528.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:41:48,796 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3546.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:41:57,307 INFO [finetune.py:976] (6/7) Epoch 1, batch 3550, loss[loss=0.3782, simple_loss=0.3795, pruned_loss=0.1885, over 4756.00 frames. ], tot_loss[loss=0.3898, simple_loss=0.3945, pruned_loss=0.1926, over 952050.91 frames. ], batch size: 54, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:42:45,308 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3593.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:42:50,745 INFO [finetune.py:976] (6/7) Epoch 1, batch 3600, loss[loss=0.4168, simple_loss=0.4266, pruned_loss=0.2035, over 4916.00 frames. ], tot_loss[loss=0.3847, simple_loss=0.3902, pruned_loss=0.1896, over 955016.59 frames. ], batch size: 37, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:42:59,647 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3607.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:43:09,363 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.842e+02 2.557e+02 2.866e+02 3.769e+02 9.044e+02, threshold=5.732e+02, percent-clipped=5.0
2023-03-25 22:43:42,993 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7252, 2.1387, 2.3078, 1.0998, 2.3854, 2.0245, 1.5832, 2.1850], device='cuda:6'), covar=tensor([0.1643, 0.1208, 0.1749, 0.3481, 0.1549, 0.2052, 0.2468, 0.1520], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0165, 0.0180, 0.0165, 0.0183, 0.0182, 0.0189, 0.0176], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:43:44,104 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3641.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:43:46,917 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3644.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:43:52,546 INFO [finetune.py:976] (6/7) Epoch 1, batch 3650, loss[loss=0.2862, simple_loss=0.3202, pruned_loss=0.1261, over 4758.00 frames. ], tot_loss[loss=0.3866, simple_loss=0.3923, pruned_loss=0.1904, over 954685.39 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:44:19,755 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6075, 1.2719, 1.4843, 1.5857, 2.1693, 1.5394, 1.2118, 1.2360], device='cuda:6'), covar=tensor([0.3387, 0.3782, 0.2785, 0.2985, 0.2953, 0.2099, 0.4885, 0.2789], device='cuda:6'), in_proj_covar=tensor([0.0212, 0.0197, 0.0183, 0.0171, 0.0217, 0.0170, 0.0194, 0.0172], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:44:20,325 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3675.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:44:21,477 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3676.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:44:40,051 INFO [finetune.py:976] (6/7) Epoch 1, batch 3700, loss[loss=0.3436, simple_loss=0.3595, pruned_loss=0.1639, over 4803.00 frames. ], tot_loss[loss=0.3886, simple_loss=0.3958, pruned_loss=0.1907, over 956535.17 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:44:52,809 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.548e+02 2.567e+02 2.980e+02 3.536e+02 5.905e+02, threshold=5.959e+02, percent-clipped=1.0
2023-03-25 22:45:09,754 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3723.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:45:23,635 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3737.0, num_to_drop=2, layers_to_drop={2, 3}
2023-03-25 22:45:43,730 INFO [finetune.py:976] (6/7) Epoch 1, batch 3750, loss[loss=0.4145, simple_loss=0.4317, pruned_loss=0.1986, over 4807.00 frames. ], tot_loss[loss=0.3873, simple_loss=0.3961, pruned_loss=0.1892, over 957260.93 frames. ], batch size: 41, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:46:00,213 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5205, 1.5447, 1.7022, 1.9446, 1.8774, 4.0409, 1.3548, 1.8878], device='cuda:6'), covar=tensor([0.1199, 0.1864, 0.1448, 0.1255, 0.1643, 0.0234, 0.1755, 0.1923], device='cuda:6'), in_proj_covar=tensor([0.0073, 0.0076, 0.0071, 0.0074, 0.0088, 0.0075, 0.0082, 0.0075], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-25 22:46:07,945 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2180, 3.6451, 3.7315, 4.0893, 3.9108, 3.7190, 4.3225, 1.3821], device='cuda:6'), covar=tensor([0.0848, 0.0832, 0.0743, 0.0908, 0.1404, 0.1351, 0.0719, 0.5035], device='cuda:6'), in_proj_covar=tensor([0.0370, 0.0244, 0.0264, 0.0294, 0.0347, 0.0288, 0.0311, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:46:33,630 INFO [finetune.py:976] (6/7) Epoch 1, batch 3800, loss[loss=0.3857, simple_loss=0.3853, pruned_loss=0.193, over 4724.00 frames. ], tot_loss[loss=0.3841, simple_loss=0.3946, pruned_loss=0.1868, over 956154.30 frames. ], batch size: 54, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:46:47,078 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.580e+02 2.141e+02 2.900e+02 3.620e+02 1.043e+03, threshold=5.800e+02, percent-clipped=4.0
2023-03-25 22:46:59,296 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0
2023-03-25 22:47:22,185 INFO [finetune.py:976] (6/7) Epoch 1, batch 3850, loss[loss=0.3234, simple_loss=0.3568, pruned_loss=0.145, over 4914.00 frames. ], tot_loss[loss=0.378, simple_loss=0.3903, pruned_loss=0.1829, over 955308.89 frames. ], batch size: 36, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:47:41,985 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4603, 1.5535, 1.4332, 1.4741, 1.0180, 3.3986, 1.2555, 1.9046], device='cuda:6'), covar=tensor([0.4705, 0.3399, 0.2708, 0.3048, 0.2330, 0.0260, 0.3023, 0.1620], device='cuda:6'), in_proj_covar=tensor([0.0119, 0.0102, 0.0109, 0.0108, 0.0100, 0.0088, 0.0088, 0.0085], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0004, 0.0005, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:6')
2023-03-25 22:48:03,595 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4191, 1.3582, 0.9943, 1.5876, 1.7434, 1.1582, 2.1596, 1.3326], device='cuda:6'), covar=tensor([0.6961, 1.3362, 1.1589, 1.3810, 0.7653, 0.6190, 0.8173, 1.0242], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0182, 0.0221, 0.0234, 0.0196, 0.0167, 0.0174, 0.0177], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0001, 0.0001], device='cuda:6')
2023-03-25 22:48:09,989 INFO [finetune.py:976] (6/7) Epoch 1, batch 3900, loss[loss=0.3402, simple_loss=0.362, pruned_loss=0.1592, over 4822.00 frames. ], tot_loss[loss=0.3725, simple_loss=0.3854, pruned_loss=0.1798, over 956051.03 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:48:10,641 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=3902.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:48:20,243 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.580e+02 2.264e+02 2.673e+02 3.196e+02 5.181e+02, threshold=5.346e+02, percent-clipped=0.0
2023-03-25 22:48:26,755 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0
2023-03-25 22:48:30,357 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=3926.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:48:50,594 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=3944.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:48:54,666 INFO [finetune.py:976] (6/7) Epoch 1, batch 3950, loss[loss=0.3151, simple_loss=0.3371, pruned_loss=0.1465, over 4748.00 frames. ], tot_loss[loss=0.3659, simple_loss=0.3794, pruned_loss=0.1762, over 954366.64 frames. ], batch size: 27, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:49:21,715 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3734, 2.0928, 1.9243, 1.0397, 1.8444, 2.0339, 1.6910, 2.0830], device='cuda:6'), covar=tensor([0.0718, 0.0957, 0.1513, 0.2530, 0.1202, 0.1808, 0.2239, 0.0793], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0167, 0.0182, 0.0167, 0.0186, 0.0185, 0.0191, 0.0178], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:49:45,851 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=3987.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:49:53,767 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=3992.0, num_to_drop=1, layers_to_drop={1}
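The [zipformer.py:2441] diagnostics print one attn_weights_entropy value per attention head (eight per line here). Entropy of a head's softmax weights is a standard health check: values near zero mean the head locks onto single positions, values near log(key_len) mean it is close to uniform. A self-contained sketch of the headline statistic; the accompanying covar / in_proj_covar / out_proj_covar tensors are related second-moment diagnostics whose exact definitions are not reconstructed here:

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        # attn: (num_heads, query_len, key_len), rows softmax-normalized.
        ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (heads, query_len)
        return ent.mean(dim=-1)  # average entropy per head, in nats

    attn = torch.softmax(torch.randn(8, 50, 50), dim=-1)
    print(attn_weights_entropy(attn))  # 8 values, each <= log(50) ~ 3.9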
2023-03-25 22:50:05,835 INFO [finetune.py:976] (6/7) Epoch 1, batch 4000, loss[loss=0.2802, simple_loss=0.3212, pruned_loss=0.1196, over 4768.00 frames. ], tot_loss[loss=0.3585, simple_loss=0.3737, pruned_loss=0.1716, over 953316.81 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 8.0
2023-03-25 22:50:16,117 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6388, 1.5235, 1.7976, 3.0264, 2.2306, 2.1688, 0.7045, 2.3945], device='cuda:6'), covar=tensor([0.1845, 0.1624, 0.1333, 0.0528, 0.0824, 0.1432, 0.2093, 0.0734], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0131, 0.0152, 0.0101, 0.0139, 0.0123, 0.0105], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-25 22:50:18,328 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.331e+02 2.072e+02 2.562e+02 2.941e+02 5.028e+02, threshold=5.123e+02, percent-clipped=0.0
2023-03-25 22:50:31,350 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4032.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:50:50,290 INFO [finetune.py:976] (6/7) Epoch 1, batch 4050, loss[loss=0.3535, simple_loss=0.3872, pruned_loss=0.1599, over 4828.00 frames. ], tot_loss[loss=0.3609, simple_loss=0.3769, pruned_loss=0.1725, over 953710.70 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:51:46,311 INFO [finetune.py:976] (6/7) Epoch 1, batch 4100, loss[loss=0.3767, simple_loss=0.4038, pruned_loss=0.1748, over 4928.00 frames. ], tot_loss[loss=0.3633, simple_loss=0.3805, pruned_loss=0.173, over 955599.69 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:52:00,029 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.481e+02 2.010e+02 2.495e+02 2.957e+02 5.246e+02, threshold=4.990e+02, percent-clipped=1.0
2023-03-25 22:52:34,840 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=4141.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:52:43,039 INFO [finetune.py:976] (6/7) Epoch 1, batch 4150, loss[loss=0.3368, simple_loss=0.3752, pruned_loss=0.1492, over 4812.00 frames. ], tot_loss[loss=0.3621, simple_loss=0.3806, pruned_loss=0.1718, over 956887.92 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:53:34,984 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
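Note grad_scale stepping from 8.0 to 16.0 at batch 4050 above; later in the log it reaches 32.0 and falls back to 16.0 within 50 batches. This is ordinary dynamic loss scaling for fp16 training: the scale doubles after a run of overflow-free steps and is halved whenever a step overflows. A minimal loop with PyTorch's stock GradScaler, with illustrative settings rather than this recipe's actual ones:

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=8.0,        # starting grad_scale (assumed)
        growth_factor=2.0,     # double after enough overflow-free steps
        backoff_factor=0.5,    # halve when a step produces inf/nan grads
        growth_interval=2000,  # overflow-free steps between doublings (assumed)
    )

    def train_step(model, optimizer, batch, criterion):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = criterion(model(batch["inputs"]), batch["targets"])
        scaler.scale(loss).backward()
        scaler.step(optimizer)  # skipped automatically on overflow
        scaler.update()         # adjusts the scale -> the logged grad_scale
        return loss.detach()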
2023-03-25 22:53:43,099 INFO [finetune.py:976] (6/7) Epoch 1, batch 4200, loss[loss=0.303, simple_loss=0.3451, pruned_loss=0.1304, over 4891.00 frames. ], tot_loss[loss=0.3587, simple_loss=0.3787, pruned_loss=0.1694, over 956804.78 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:53:43,818 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4202.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:53:43,841 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=4202.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:54:02,774 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.292e+02 2.089e+02 2.432e+02 2.936e+02 5.530e+02, threshold=4.864e+02, percent-clipped=1.0
2023-03-25 22:54:40,859 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5152, 1.3703, 1.8089, 2.8564, 2.0409, 2.1683, 0.9886, 2.2165], device='cuda:6'), covar=tensor([0.1907, 0.1769, 0.1341, 0.0598, 0.0941, 0.1344, 0.1817, 0.0832], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0154, 0.0103, 0.0140, 0.0124, 0.0106], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-25 22:54:44,488 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4250.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:54:45,013 INFO [finetune.py:976] (6/7) Epoch 1, batch 4250, loss[loss=0.3951, simple_loss=0.3923, pruned_loss=0.1989, over 4880.00 frames. ], tot_loss[loss=0.3537, simple_loss=0.3742, pruned_loss=0.1666, over 957878.13 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:55:23,648 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4282.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:55:27,291 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=4288.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:55:35,146 INFO [finetune.py:976] (6/7) Epoch 1, batch 4300, loss[loss=0.3048, simple_loss=0.3289, pruned_loss=0.1403, over 4851.00 frames. ], tot_loss[loss=0.3473, simple_loss=0.3685, pruned_loss=0.1631, over 958345.85 frames. ], batch size: 44, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:55:45,448 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.425e+02 1.950e+02 2.267e+02 2.860e+02 4.056e+02, threshold=4.534e+02, percent-clipped=0.0
2023-03-25 22:56:13,124 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.89 vs. limit=5.0
2023-03-25 22:56:13,428 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4332.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 22:56:26,403 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4844, 1.2857, 1.3998, 1.5393, 2.1041, 1.3777, 1.1759, 1.0716], device='cuda:6'), covar=tensor([0.2695, 0.2905, 0.2397, 0.2172, 0.2394, 0.1744, 0.3725, 0.2162], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0192, 0.0179, 0.0166, 0.0213, 0.0164, 0.0190, 0.0168], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 22:56:35,749 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=4349.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:56:36,386 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0592, 0.6991, 0.9714, 0.7217, 0.7651, 0.7207, 0.7228, 0.8649], device='cuda:6'), covar=tensor([4.0914, 8.1630, 4.8360, 6.9470, 7.5318, 4.7659, 8.8711, 4.7575], device='cuda:6'), in_proj_covar=tensor([0.0201, 0.0227, 0.0213, 0.0242, 0.0225, 0.0197, 0.0251, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 22:56:36,850 INFO [finetune.py:976] (6/7) Epoch 1, batch 4350, loss[loss=0.3455, simple_loss=0.3589, pruned_loss=0.166, over 4718.00 frames. ], tot_loss[loss=0.341, simple_loss=0.3629, pruned_loss=0.1596, over 958942.32 frames. ], batch size: 59, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:57:06,610 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-25 22:57:17,559 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4380.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 22:57:44,940 INFO [finetune.py:976] (6/7) Epoch 1, batch 4400, loss[loss=0.2776, simple_loss=0.3124, pruned_loss=0.1214, over 4807.00 frames. ], tot_loss[loss=0.3423, simple_loss=0.3641, pruned_loss=0.1603, over 958464.30 frames. ], batch size: 25, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:57:57,029 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.978e+02 2.430e+02 2.895e+02 4.966e+02, threshold=4.860e+02, percent-clipped=1.0
2023-03-25 22:58:28,113 INFO [finetune.py:976] (6/7) Epoch 1, batch 4450, loss[loss=0.3148, simple_loss=0.3604, pruned_loss=0.1346, over 4795.00 frames. ], tot_loss[loss=0.3458, simple_loss=0.3681, pruned_loss=0.1617, over 958834.06 frames. ], batch size: 29, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:58:28,652 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-25 22:59:09,844 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4497.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 22:59:12,227 INFO [finetune.py:976] (6/7) Epoch 1, batch 4500, loss[loss=0.3942, simple_loss=0.4057, pruned_loss=0.1914, over 4796.00 frames. ], tot_loss[loss=0.3477, simple_loss=0.3709, pruned_loss=0.1623, over 957890.28 frames. ], batch size: 51, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 22:59:29,323 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.227e+02 2.111e+02 2.516e+02 2.889e+02 5.762e+02, threshold=5.032e+02, percent-clipped=1.0
2023-03-25 23:00:15,076 INFO [finetune.py:976] (6/7) Epoch 1, batch 4550, loss[loss=0.3509, simple_loss=0.382, pruned_loss=0.1598, over 4719.00 frames. ], tot_loss[loss=0.3491, simple_loss=0.3718, pruned_loss=0.1632, over 955479.59 frames. ], batch size: 59, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:00:55,783 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4582.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:01:07,660 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8856, 1.9421, 1.8419, 1.2900, 2.5141, 2.2631, 1.9657, 1.9289], device='cuda:6'), covar=tensor([0.0779, 0.0683, 0.0795, 0.1037, 0.0367, 0.0695, 0.0848, 0.1110], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0130, 0.0134, 0.0122, 0.0107, 0.0132, 0.0139, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:01:14,988 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3773, 1.4320, 0.8391, 2.1082, 2.4329, 1.7457, 1.6668, 2.0095], device='cuda:6'), covar=tensor([0.1647, 0.2227, 0.2475, 0.1259, 0.2209, 0.2139, 0.1468, 0.2052], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0094, 0.0113, 0.0090, 0.0122, 0.0092, 0.0096, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-25 23:01:25,592 INFO [finetune.py:976] (6/7) Epoch 1, batch 4600, loss[loss=0.3334, simple_loss=0.3565, pruned_loss=0.1552, over 4814.00 frames. ], tot_loss[loss=0.3445, simple_loss=0.3685, pruned_loss=0.1602, over 955715.63 frames. ], batch size: 40, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:01:36,973 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2859, 1.4851, 0.5984, 2.0790, 2.3666, 1.7060, 1.7151, 2.1004], device='cuda:6'), covar=tensor([0.1448, 0.2023, 0.2593, 0.1088, 0.2046, 0.2227, 0.1327, 0.1814], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0094, 0.0113, 0.0090, 0.0122, 0.0093, 0.0096, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-25 23:01:38,110 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.209e+02 2.082e+02 2.456e+02 3.064e+02 5.977e+02, threshold=4.911e+02, percent-clipped=1.0
2023-03-25 23:01:59,353 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4630.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:02:18,007 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=4644.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:02:28,983 INFO [finetune.py:976] (6/7) Epoch 1, batch 4650, loss[loss=0.3344, simple_loss=0.3502, pruned_loss=0.1594, over 4684.00 frames. ], tot_loss[loss=0.3371, simple_loss=0.3619, pruned_loss=0.1562, over 955029.22 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:02:53,640 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5434, 1.4308, 1.1894, 1.1504, 1.5932, 1.9125, 1.6276, 1.1381], device='cuda:6'), covar=tensor([0.0271, 0.0463, 0.0588, 0.0508, 0.0301, 0.0222, 0.0239, 0.0520], device='cuda:6'), in_proj_covar=tensor([0.0083, 0.0111, 0.0130, 0.0110, 0.0103, 0.0099, 0.0086, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.5007e-05, 8.8206e-05, 1.0559e-04, 8.7279e-05, 8.2014e-05, 7.4219e-05, 6.6881e-05, 8.5862e-05], device='cuda:6')
2023-03-25 23:03:06,339 INFO [finetune.py:976] (6/7) Epoch 1, batch 4700, loss[loss=0.2986, simple_loss=0.33, pruned_loss=0.1336, over 4928.00 frames. ], tot_loss[loss=0.3308, simple_loss=0.3562, pruned_loss=0.1527, over 957253.49 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:03:20,653 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.265e+02 1.870e+02 2.224e+02 2.796e+02 5.273e+02, threshold=4.448e+02, percent-clipped=2.0
2023-03-25 23:03:48,488 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6661, 1.4324, 1.6769, 1.5956, 2.2562, 1.6706, 1.4306, 1.3184], device='cuda:6'), covar=tensor([0.2778, 0.3083, 0.2196, 0.2341, 0.2584, 0.1769, 0.3809, 0.2222], device='cuda:6'), in_proj_covar=tensor([0.0209, 0.0194, 0.0180, 0.0167, 0.0214, 0.0165, 0.0193, 0.0170], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:03:59,442 INFO [finetune.py:976] (6/7) Epoch 1, batch 4750, loss[loss=0.2787, simple_loss=0.3088, pruned_loss=0.1243, over 4725.00 frames. ], tot_loss[loss=0.3278, simple_loss=0.3534, pruned_loss=0.1511, over 953929.45 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:04:09,673 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3433, 1.1407, 0.9176, 1.0302, 1.1077, 1.0586, 1.1166, 1.7759], device='cuda:6'), covar=tensor([4.1294, 4.2294, 3.6521, 5.7084, 3.3133, 2.4967, 4.2556, 1.3184], device='cuda:6'), in_proj_covar=tensor([0.0215, 0.0206, 0.0190, 0.0239, 0.0200, 0.0173, 0.0205, 0.0154], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 23:04:34,101 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2983, 2.0106, 2.7785, 1.7796, 2.3873, 2.5724, 2.2006, 2.8343], device='cuda:6'), covar=tensor([0.2169, 0.2263, 0.1936, 0.2478, 0.1143, 0.1907, 0.2366, 0.1212], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0199, 0.0197, 0.0187, 0.0169, 0.0213, 0.0206, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:04:36,377 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4797.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:04:39,213 INFO [finetune.py:976] (6/7) Epoch 1, batch 4800, loss[loss=0.323, simple_loss=0.355, pruned_loss=0.1455, over 4871.00 frames. ], tot_loss[loss=0.332, simple_loss=0.3574, pruned_loss=0.1533, over 954668.18 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:04:56,834 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.406e+02 2.096e+02 2.556e+02 3.186e+02 5.883e+02, threshold=5.111e+02, percent-clipped=4.0
2023-03-25 23:05:32,168 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4845.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:05:41,981 INFO [finetune.py:976] (6/7) Epoch 1, batch 4850, loss[loss=0.342, simple_loss=0.3657, pruned_loss=0.1592, over 4821.00 frames. ], tot_loss[loss=0.3335, simple_loss=0.36, pruned_loss=0.1535, over 953600.71 frames. ], batch size: 45, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:06:12,790 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9901, 1.2481, 0.9481, 1.7918, 2.1465, 1.4419, 1.4951, 1.8292], device='cuda:6'), covar=tensor([0.1634, 0.2354, 0.2381, 0.1267, 0.2291, 0.2282, 0.1538, 0.2016], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0096, 0.0114, 0.0091, 0.0124, 0.0094, 0.0098, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-25 23:06:28,851 INFO [finetune.py:976] (6/7) Epoch 1, batch 4900, loss[loss=0.3618, simple_loss=0.3898, pruned_loss=0.1669, over 4803.00 frames. ], tot_loss[loss=0.3348, simple_loss=0.3616, pruned_loss=0.154, over 953762.99 frames. ], batch size: 41, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:06:45,501 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.521e+02 2.056e+02 2.408e+02 2.893e+02 5.886e+02, threshold=4.817e+02, percent-clipped=2.0
2023-03-25 23:07:15,542 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=4944.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:07:19,811 INFO [finetune.py:976] (6/7) Epoch 1, batch 4950, loss[loss=0.3091, simple_loss=0.3467, pruned_loss=0.1358, over 4895.00 frames. ], tot_loss[loss=0.3337, simple_loss=0.3613, pruned_loss=0.1531, over 953728.08 frames. ], batch size: 35, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:08:11,781 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=4992.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:08:22,750 INFO [finetune.py:976] (6/7) Epoch 1, batch 5000, loss[loss=0.3239, simple_loss=0.3488, pruned_loss=0.1495, over 4774.00 frames. ], tot_loss[loss=0.3311, simple_loss=0.3593, pruned_loss=0.1515, over 954010.76 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:08:32,728 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 2.131e+02 2.461e+02 3.038e+02 5.796e+02, threshold=4.923e+02, percent-clipped=4.0
2023-03-25 23:09:00,010 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0640, 0.8385, 0.9360, 0.9839, 1.1618, 1.2147, 1.0298, 0.8973], device='cuda:6'), covar=tensor([0.0293, 0.0368, 0.0577, 0.0403, 0.0325, 0.0282, 0.0298, 0.0396], device='cuda:6'), in_proj_covar=tensor([0.0083, 0.0111, 0.0131, 0.0110, 0.0103, 0.0099, 0.0087, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.5120e-05, 8.8113e-05, 1.0613e-04, 8.7661e-05, 8.2125e-05, 7.3882e-05, 6.7326e-05, 8.5886e-05], device='cuda:6')
2023-03-25 23:09:20,249 INFO [finetune.py:976] (6/7) Epoch 1, batch 5050, loss[loss=0.2664, simple_loss=0.3049, pruned_loss=0.114, over 4757.00 frames. ], tot_loss[loss=0.3252, simple_loss=0.3541, pruned_loss=0.1482, over 955786.22 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:09:28,136 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7770, 1.6673, 1.2181, 1.6662, 1.5677, 1.4331, 1.4772, 2.3917], device='cuda:6'), covar=tensor([2.4671, 2.7406, 2.2850, 3.8118, 2.1124, 1.5874, 3.0049, 0.7577], device='cuda:6'), in_proj_covar=tensor([0.0218, 0.0208, 0.0191, 0.0242, 0.0203, 0.0174, 0.0207, 0.0155], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 23:09:33,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6647, 1.7408, 1.7037, 1.1200, 2.0807, 1.8296, 1.7079, 1.5865], device='cuda:6'), covar=tensor([0.0780, 0.0772, 0.0757, 0.1066, 0.0480, 0.0768, 0.0874, 0.1375], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0129, 0.0134, 0.0122, 0.0106, 0.0132, 0.0138, 0.0158], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:10:01,165 INFO [finetune.py:976] (6/7) Epoch 1, batch 5100, loss[loss=0.2822, simple_loss=0.308, pruned_loss=0.1281, over 4725.00 frames. ], tot_loss[loss=0.3187, simple_loss=0.348, pruned_loss=0.1447, over 952681.22 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:10:09,454 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.918e+02 2.379e+02 2.966e+02 8.444e+02, threshold=4.758e+02, percent-clipped=2.0
2023-03-25 23:10:23,538 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.14 vs. limit=5.0
2023-03-25 23:10:34,837 INFO [finetune.py:976] (6/7) Epoch 1, batch 5150, loss[loss=0.2891, simple_loss=0.3192, pruned_loss=0.1295, over 4687.00 frames. ], tot_loss[loss=0.3176, simple_loss=0.3469, pruned_loss=0.1441, over 952277.51 frames. ], batch size: 23, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:11:14,839 INFO [finetune.py:976] (6/7) Epoch 1, batch 5200, loss[loss=0.4194, simple_loss=0.4271, pruned_loss=0.2058, over 4902.00 frames. ], tot_loss[loss=0.3223, simple_loss=0.3519, pruned_loss=0.1464, over 953383.28 frames. ], batch size: 36, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:11:24,772 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.601e+02 2.261e+02 2.570e+02 3.078e+02 5.221e+02, threshold=5.140e+02, percent-clipped=2.0
2023-03-25 23:12:06,970 INFO [finetune.py:976] (6/7) Epoch 1, batch 5250, loss[loss=0.3097, simple_loss=0.3495, pruned_loss=0.1349, over 4924.00 frames. ], tot_loss[loss=0.3226, simple_loss=0.353, pruned_loss=0.1461, over 952991.47 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:12:55,136 INFO [finetune.py:976] (6/7) Epoch 1, batch 5300, loss[loss=0.3264, simple_loss=0.3781, pruned_loss=0.1374, over 4924.00 frames. ], tot_loss[loss=0.3255, simple_loss=0.3562, pruned_loss=0.1474, over 954086.87 frames. ], batch size: 42, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:13:08,326 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.321e+02 2.032e+02 2.465e+02 2.907e+02 4.480e+02, threshold=4.930e+02, percent-clipped=0.0
2023-03-25 23:13:37,243 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-25 23:13:43,836 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.73 vs. limit=5.0
2023-03-25 23:13:49,136 INFO [finetune.py:976] (6/7) Epoch 1, batch 5350, loss[loss=0.3487, simple_loss=0.3814, pruned_loss=0.158, over 4800.00 frames. ], tot_loss[loss=0.322, simple_loss=0.3541, pruned_loss=0.1449, over 952975.33 frames. ], batch size: 41, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:14:48,106 INFO [finetune.py:976] (6/7) Epoch 1, batch 5400, loss[loss=0.2689, simple_loss=0.3009, pruned_loss=0.1184, over 4769.00 frames. ], tot_loss[loss=0.3189, simple_loss=0.3505, pruned_loss=0.1437, over 954112.69 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:14:55,971 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.226e+02 1.939e+02 2.339e+02 2.729e+02 4.650e+02, threshold=4.678e+02, percent-clipped=0.0
2023-03-25 23:15:31,413 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2720, 1.8969, 1.7763, 0.7873, 1.8336, 1.8334, 1.4879, 1.9365], device='cuda:6'), covar=tensor([0.0947, 0.1086, 0.1765, 0.2596, 0.1569, 0.2414, 0.2374, 0.1262], device='cuda:6'), in_proj_covar=tensor([0.0158, 0.0176, 0.0189, 0.0173, 0.0196, 0.0195, 0.0199, 0.0187], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:15:39,351 INFO [finetune.py:976] (6/7) Epoch 1, batch 5450, loss[loss=0.2796, simple_loss=0.3244, pruned_loss=0.1174, over 4815.00 frames. ], tot_loss[loss=0.3143, simple_loss=0.346, pruned_loss=0.1413, over 955281.61 frames. ], batch size: 39, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:15:51,601 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9738, 1.7030, 1.4522, 1.5188, 1.6876, 1.4875, 1.5342, 2.5368], device='cuda:6'), covar=tensor([3.3029, 3.3176, 2.7564, 4.4058, 2.7085, 2.0984, 3.4805, 0.8938], device='cuda:6'), in_proj_covar=tensor([0.0222, 0.0212, 0.0194, 0.0247, 0.0207, 0.0177, 0.0210, 0.0157], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 23:16:31,590 INFO [finetune.py:976] (6/7) Epoch 1, batch 5500, loss[loss=0.3593, simple_loss=0.3793, pruned_loss=0.1696, over 4826.00 frames. ], tot_loss[loss=0.3107, simple_loss=0.3424, pruned_loss=0.1395, over 956064.16 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:16:45,963 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.302e+02 2.026e+02 2.277e+02 2.875e+02 1.009e+03, threshold=4.553e+02, percent-clipped=5.0
2023-03-25 23:16:58,309 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3221, 2.4638, 2.2342, 1.7375, 2.7678, 2.6114, 2.3974, 2.3114], device='cuda:6'), covar=tensor([0.0732, 0.0635, 0.0792, 0.1048, 0.0427, 0.0744, 0.0793, 0.0973], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0131, 0.0137, 0.0126, 0.0107, 0.0135, 0.0141, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:17:20,558 INFO [finetune.py:976] (6/7) Epoch 1, batch 5550, loss[loss=0.3055, simple_loss=0.3536, pruned_loss=0.1287, over 4874.00 frames. ], tot_loss[loss=0.3119, simple_loss=0.3428, pruned_loss=0.1405, over 952925.44 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:18:00,566 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3239, 3.7096, 3.9137, 4.2101, 4.0503, 3.8527, 4.4184, 1.4159], device='cuda:6'), covar=tensor([0.0628, 0.0721, 0.0664, 0.0719, 0.1026, 0.1186, 0.0600, 0.4591], device='cuda:6'), in_proj_covar=tensor([0.0367, 0.0243, 0.0268, 0.0294, 0.0345, 0.0288, 0.0310, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:18:01,092 INFO [finetune.py:976] (6/7) Epoch 1, batch 5600, loss[loss=0.3444, simple_loss=0.3706, pruned_loss=0.1591, over 4907.00 frames. ], tot_loss[loss=0.3147, simple_loss=0.3467, pruned_loss=0.1414, over 953286.86 frames. ], batch size: 36, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:18:19,453 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.468e+02 1.839e+02 2.287e+02 2.793e+02 4.099e+02, threshold=4.573e+02, percent-clipped=0.0
2023-03-25 23:18:21,401 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6197, 1.3769, 1.4659, 1.6625, 2.4370, 1.6090, 1.3787, 1.2994], device='cuda:6'), covar=tensor([0.3184, 0.3019, 0.2565, 0.2452, 0.2526, 0.1661, 0.3719, 0.2427], device='cuda:6'), in_proj_covar=tensor([0.0213, 0.0198, 0.0183, 0.0170, 0.0219, 0.0167, 0.0198, 0.0173], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:18:38,540 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3432, 2.8968, 2.7942, 1.3135, 3.0387, 2.2015, 0.7839, 1.8108], device='cuda:6'), covar=tensor([0.2223, 0.1846, 0.1826, 0.3347, 0.1203, 0.1106, 0.3954, 0.1621], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0158, 0.0161, 0.0125, 0.0151, 0.0115, 0.0143, 0.0118], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-25 23:18:42,133 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0
2023-03-25 23:18:59,408 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5650.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:18:59,878 INFO [finetune.py:976] (6/7) Epoch 1, batch 5650, loss[loss=0.3497, simple_loss=0.3842, pruned_loss=0.1576, over 4799.00 frames. ], tot_loss[loss=0.3175, simple_loss=0.3504, pruned_loss=0.1423, over 952472.68 frames. ], batch size: 41, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:19:35,523 INFO [finetune.py:976] (6/7) Epoch 1, batch 5700, loss[loss=0.2593, simple_loss=0.2869, pruned_loss=0.1159, over 4369.00 frames. ], tot_loss[loss=0.314, simple_loss=0.3454, pruned_loss=0.1413, over 933120.24 frames. ], batch size: 18, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:19:41,539 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5711.0, num_to_drop=1, layers_to_drop={3}
2023-03-25 23:19:43,181 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.818e+02 2.245e+02 2.685e+02 4.321e+02, threshold=4.489e+02, percent-clipped=0.0
2023-03-25 23:19:43,855 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5744, 3.0755, 2.0825, 1.9407, 3.4299, 3.2033, 2.7284, 2.7914], device='cuda:6'), covar=tensor([0.0868, 0.0571, 0.1177, 0.1163, 0.0389, 0.0813, 0.0977, 0.0961], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0132, 0.0138, 0.0126, 0.0108, 0.0136, 0.0142, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:20:08,411 INFO [finetune.py:976] (6/7) Epoch 2, batch 0, loss[loss=0.3355, simple_loss=0.3679, pruned_loss=0.1516, over 4895.00 frames. ], tot_loss[loss=0.3355, simple_loss=0.3679, pruned_loss=0.1516, over 4895.00 frames. ], batch size: 43, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:20:08,412 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-25 23:20:25,000 INFO [finetune.py:1010] (6/7) Epoch 2, validation: loss=0.2224, simple_loss=0.2847, pruned_loss=0.08, over 2265189.00 frames.
2023-03-25 23:20:25,000 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB
2023-03-25 23:20:56,841 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5755.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:20:59,307 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0216, 2.1454, 1.7467, 1.4559, 2.3855, 2.1951, 2.0795, 1.8087], device='cuda:6'), covar=tensor([0.0777, 0.0593, 0.1039, 0.1136, 0.0394, 0.0902, 0.0827, 0.1106], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0132, 0.0138, 0.0126, 0.0107, 0.0136, 0.0142, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:21:20,519 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3161, 1.3997, 1.3887, 0.8865, 1.6755, 1.4259, 1.3769, 1.3426], device='cuda:6'), covar=tensor([0.0766, 0.0812, 0.0783, 0.1083, 0.0613, 0.0910, 0.0854, 0.1362], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0132, 0.0138, 0.0126, 0.0107, 0.0136, 0.0142, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:21:22,833 INFO [finetune.py:976] (6/7) Epoch 2, batch 50, loss[loss=0.3065, simple_loss=0.3413, pruned_loss=0.1358, over 4853.00 frames. ], tot_loss[loss=0.3166, simple_loss=0.3491, pruned_loss=0.142, over 217202.42 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:21:42,186 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7613, 1.1625, 0.9452, 1.5861, 2.0326, 1.0656, 1.2959, 1.7500], device='cuda:6'), covar=tensor([0.1482, 0.1949, 0.1994, 0.1135, 0.1988, 0.2099, 0.1346, 0.1674], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0096, 0.0115, 0.0092, 0.0124, 0.0095, 0.0098, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
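As at batch 3000 of epoch 1, the epoch boundary above triggers a full dev-set pass ([finetune.py:1001] / [finetune.py:1010] / [finetune.py:1011]): losses are accumulated over all 2,265,189 validation frames, normalized per frame, and followed by the peak CUDA memory reading. A hedged sketch of that pattern; the helper names are illustrative, not icefall's actual API:

    import torch

    def compute_validation_loss(model, dev_loader, compute_loss) -> float:
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = compute_loss(model, batch)  # batch totals
                tot_loss += loss.item()
                tot_frames += num_frames
        model.train()
        mem_mb = torch.cuda.max_memory_allocated() // (1024 * 1024)
        print(f"validation: loss={tot_loss / tot_frames:.4f}; "
              f"max mem {mem_mb}MB")
        return tot_loss / tot_frames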
2023-03-25 23:21:50,516 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0
2023-03-25 23:21:54,357 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.870e+02 2.317e+02 2.912e+02 7.564e+02, threshold=4.633e+02, percent-clipped=3.0
2023-03-25 23:21:55,709 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5816.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:22:11,075 INFO [finetune.py:976] (6/7) Epoch 2, batch 100, loss[loss=0.2932, simple_loss=0.3348, pruned_loss=0.1257, over 4814.00 frames. ], tot_loss[loss=0.307, simple_loss=0.3397, pruned_loss=0.1372, over 381441.01 frames. ], batch size: 38, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:22:36,194 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5858.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:22:42,752 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.06 vs. limit=5.0
2023-03-25 23:22:49,652 INFO [finetune.py:976] (6/7) Epoch 2, batch 150, loss[loss=0.2454, simple_loss=0.2936, pruned_loss=0.09861, over 4910.00 frames. ], tot_loss[loss=0.3001, simple_loss=0.3338, pruned_loss=0.1332, over 510680.78 frames. ], batch size: 43, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:23:18,196 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.293e+02 1.883e+02 2.329e+02 2.858e+02 5.160e+02, threshold=4.657e+02, percent-clipped=2.0
2023-03-25 23:23:21,841 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=5919.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:23:25,379 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0917, 0.8205, 0.8700, 0.8939, 1.1405, 1.2502, 1.0834, 0.8972], device='cuda:6'), covar=tensor([0.0273, 0.0365, 0.0559, 0.0344, 0.0269, 0.0245, 0.0248, 0.0413], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0110, 0.0129, 0.0109, 0.0102, 0.0097, 0.0086, 0.0107], device='cuda:6'), out_proj_covar=tensor([6.4034e-05, 8.6709e-05, 1.0411e-04, 8.6535e-05, 8.0969e-05, 7.2280e-05, 6.6363e-05, 8.3910e-05], device='cuda:6')
2023-03-25 23:23:28,110 INFO [finetune.py:976] (6/7) Epoch 2, batch 200, loss[loss=0.2823, simple_loss=0.33, pruned_loss=0.1173, over 4928.00 frames. ], tot_loss[loss=0.3027, simple_loss=0.335, pruned_loss=0.1351, over 611011.71 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:23:38,855 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3238, 1.1155, 1.0975, 0.9532, 1.4187, 1.5183, 1.2640, 1.0488], device='cuda:6'), covar=tensor([0.0337, 0.0433, 0.0649, 0.0462, 0.0321, 0.0325, 0.0394, 0.0481], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0110, 0.0129, 0.0109, 0.0102, 0.0097, 0.0086, 0.0107], device='cuda:6'), out_proj_covar=tensor([6.4172e-05, 8.6866e-05, 1.0446e-04, 8.6724e-05, 8.1187e-05, 7.2466e-05, 6.6620e-05, 8.4222e-05], device='cuda:6')
2023-03-25 23:24:01,220 INFO [finetune.py:976] (6/7) Epoch 2, batch 250, loss[loss=0.2684, simple_loss=0.3052, pruned_loss=0.1158, over 4913.00 frames. ], tot_loss[loss=0.3067, simple_loss=0.3393, pruned_loss=0.1371, over 688074.46 frames. ], batch size: 32, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:24:09,530 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-25 23:24:20,422 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=5990.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:24:42,068 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6006.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:24:48,583 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.968e+02 2.365e+02 2.842e+02 7.361e+02, threshold=4.731e+02, percent-clipped=2.0
2023-03-25 23:25:01,949 INFO [finetune.py:976] (6/7) Epoch 2, batch 300, loss[loss=0.2886, simple_loss=0.3358, pruned_loss=0.1207, over 4868.00 frames. ], tot_loss[loss=0.3103, simple_loss=0.3438, pruned_loss=0.1384, over 748398.14 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:25:29,064 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6043.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:25:38,851 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6051.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:26:11,342 INFO [finetune.py:976] (6/7) Epoch 2, batch 350, loss[loss=0.3137, simple_loss=0.3601, pruned_loss=0.1337, over 4865.00 frames. ], tot_loss[loss=0.3105, simple_loss=0.3449, pruned_loss=0.138, over 795005.51 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 32.0
2023-03-25 23:26:26,286 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6399, 1.6596, 1.5001, 1.6628, 1.0612, 3.3398, 1.3300, 2.0031], device='cuda:6'), covar=tensor([0.3446, 0.2254, 0.1981, 0.2096, 0.1973, 0.0188, 0.2870, 0.1459], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0107, 0.0113, 0.0113, 0.0108, 0.0092, 0.0094, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:6')
2023-03-25 23:26:35,015 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6104.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:26:39,148 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6111.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:26:41,040 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4798, 1.8054, 1.0298, 1.8770, 1.7934, 1.2601, 2.7674, 1.5735], device='cuda:6'), covar=tensor([0.2284, 0.4188, 0.4907, 0.4596, 0.2944, 0.2169, 0.2778, 0.3296], device='cuda:6'), in_proj_covar=tensor([0.0158, 0.0187, 0.0228, 0.0241, 0.0201, 0.0173, 0.0188, 0.0181], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:26:41,480 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.275e+02 2.097e+02 2.536e+02 2.955e+02 5.135e+02, threshold=5.071e+02, percent-clipped=1.0
2023-03-25 23:26:57,926 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6354, 1.7349, 1.6886, 1.1887, 1.5443, 1.8983, 1.8678, 1.4946], device='cuda:6'), covar=tensor([0.1062, 0.0567, 0.0462, 0.0718, 0.0434, 0.0507, 0.0329, 0.0658], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0146, 0.0113, 0.0123, 0.0124, 0.0112, 0.0139, 0.0140], device='cuda:6'), out_proj_covar=tensor([9.1867e-05, 1.0888e-04, 8.2759e-05, 9.0664e-05, 8.9918e-05, 8.2850e-05, 1.0382e-04, 1.0406e-04], device='cuda:6')
2023-03-25 23:26:59,637 INFO [finetune.py:976] (6/7) Epoch 2, batch 400, loss[loss=0.3151, simple_loss=0.3407, pruned_loss=0.1448, over 4865.00 frames. ], tot_loss[loss=0.3103, simple_loss=0.3446, pruned_loss=0.138, over 830877.53 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:27:02,675 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-25 23:27:09,987 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6135.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:27:49,742 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6170.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:27:59,431 INFO [finetune.py:976] (6/7) Epoch 2, batch 450, loss[loss=0.2589, simple_loss=0.2996, pruned_loss=0.1091, over 4745.00 frames. ], tot_loss[loss=0.308, simple_loss=0.3427, pruned_loss=0.1366, over 857753.78 frames. ], batch size: 54, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:28:01,284 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6213, 1.4220, 2.0165, 3.2619, 2.4100, 2.4154, 0.8564, 2.5359], device='cuda:6'), covar=tensor([0.2101, 0.1817, 0.1622, 0.0615, 0.0908, 0.1499, 0.2282, 0.0767], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0119, 0.0137, 0.0160, 0.0104, 0.0145, 0.0129, 0.0108], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-25 23:28:14,490 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6196.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:28:18,022 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4702, 1.4679, 0.8158, 2.2589, 2.6994, 1.8553, 1.6806, 2.2312], device='cuda:6'), covar=tensor([0.1510, 0.2184, 0.2489, 0.1180, 0.1785, 0.1962, 0.1510, 0.1936], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0097, 0.0115, 0.0092, 0.0123, 0.0096, 0.0098, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-25 23:28:25,706 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6203.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:28:33,031 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6211.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:28:34,846 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4166, 3.8577, 3.9445, 4.2681, 4.1809, 3.9505, 4.5207, 1.5449], device='cuda:6'), covar=tensor([0.0793, 0.0830, 0.0853, 0.0950, 0.1260, 0.1376, 0.0685, 0.5005], device='cuda:6'), in_proj_covar=tensor([0.0369, 0.0246, 0.0269, 0.0298, 0.0348, 0.0290, 0.0311, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:28:34,848 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6214.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:28:35,343 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.140e+02 1.932e+02 2.244e+02 2.718e+02 3.817e+02, threshold=4.487e+02, percent-clipped=0.0
2023-03-25 23:28:39,717 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6247, 1.6162, 1.4855, 1.6416, 0.9640, 3.2179, 1.1553, 1.8511], device='cuda:6'), covar=tensor([0.3460, 0.2439, 0.2187, 0.2246, 0.2172, 0.0232, 0.3108, 0.1487], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0107, 0.0113, 0.0114, 0.0108, 0.0092, 0.0095, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:6')
2023-03-25 23:28:45,040 INFO [finetune.py:976] (6/7) Epoch 2, batch 500, loss[loss=0.258, simple_loss=0.296, pruned_loss=0.11, over 4798.00 frames. ], tot_loss[loss=0.3035, simple_loss=0.3384, pruned_loss=0.1343, over 879994.18 frames. ], batch size: 51, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:28:52,704 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6231.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:29:25,578 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6264.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:29:26,789 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6266.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:29:35,131 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6272.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:29:38,741 INFO [finetune.py:976] (6/7) Epoch 2, batch 550, loss[loss=0.3135, simple_loss=0.3393, pruned_loss=0.1439, over 4782.00 frames. ], tot_loss[loss=0.2995, simple_loss=0.3342, pruned_loss=0.1324, over 897359.93 frames. ], batch size: 26, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:30:18,225 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6306.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 23:30:29,436 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.346e+02 1.947e+02 2.353e+02 2.718e+02 5.175e+02, threshold=4.705e+02, percent-clipped=1.0
2023-03-25 23:30:47,515 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6327.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:30:48,004 INFO [finetune.py:976] (6/7) Epoch 2, batch 600, loss[loss=0.3192, simple_loss=0.3544, pruned_loss=0.142, over 4813.00 frames. ], tot_loss[loss=0.299, simple_loss=0.3334, pruned_loss=0.1323, over 910174.13 frames. ], batch size: 51, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:30:50,728 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1647, 2.4922, 2.4507, 1.3557, 2.5902, 2.3500, 1.9190, 2.2403], device='cuda:6'), covar=tensor([0.0860, 0.1253, 0.2341, 0.3026, 0.1950, 0.2241, 0.2545, 0.1685], device='cuda:6'), in_proj_covar=tensor([0.0160, 0.0180, 0.0193, 0.0178, 0.0201, 0.0200, 0.0203, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:31:07,302 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6346.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:31:10,393 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6391, 1.7852, 1.5861, 1.7542, 0.8877, 3.8952, 1.3916, 1.9236], device='cuda:6'), covar=tensor([0.3676, 0.2389, 0.2155, 0.2198, 0.2338, 0.0145, 0.2925, 0.1657], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0107, 0.0113, 0.0114, 0.0108, 0.0092, 0.0095, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0004, 0.0004], device='cuda:6')
2023-03-25 23:31:13,140 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6354.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 23:31:28,046 INFO [finetune.py:976] (6/7) Epoch 2, batch 650, loss[loss=0.3404, simple_loss=0.3543, pruned_loss=0.1632, over 4789.00 frames. ], tot_loss[loss=0.3054, simple_loss=0.3396, pruned_loss=0.1356, over 920515.31 frames. ], batch size: 26, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:31:42,401 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6399.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:31:51,310 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6411.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:31:53,615 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.435e+02 2.011e+02 2.373e+02 2.999e+02 4.783e+02, threshold=4.746e+02, percent-clipped=1.0
2023-03-25 23:31:59,777 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7923, 1.6650, 2.1712, 2.9768, 2.2619, 2.3716, 1.2198, 2.3349], device='cuda:6'), covar=tensor([0.1629, 0.1438, 0.1152, 0.0623, 0.0763, 0.2130, 0.1625, 0.0734], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0119, 0.0137, 0.0160, 0.0104, 0.0144, 0.0129, 0.0108], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-25 23:32:01,500 INFO [finetune.py:976] (6/7) Epoch 2, batch 700, loss[loss=0.3234, simple_loss=0.3471, pruned_loss=0.1499, over 4863.00 frames. ], tot_loss[loss=0.3064, simple_loss=0.3406, pruned_loss=0.1361, over 927107.47 frames. ], batch size: 31, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:32:22,203 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6459.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:32:24,551 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6462.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:32:27,963 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6467.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:32:31,601 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6390, 2.0859, 2.0132, 1.0143, 2.1665, 1.9737, 1.6034, 2.0519], device='cuda:6'), covar=tensor([0.0625, 0.1284, 0.1722, 0.2631, 0.1452, 0.2084, 0.2310, 0.1308], device='cuda:6'), in_proj_covar=tensor([0.0161, 0.0181, 0.0194, 0.0178, 0.0202, 0.0201, 0.0204, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:32:34,494 INFO [finetune.py:976] (6/7) Epoch 2, batch 750, loss[loss=0.2593, simple_loss=0.3044, pruned_loss=0.1071, over 4783.00 frames. ], tot_loss[loss=0.3083, simple_loss=0.3427, pruned_loss=0.137, over 934630.08 frames.
], batch size: 25, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:32:42,501 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6491.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:32:44,301 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2471, 1.8763, 1.6176, 0.7891, 1.7222, 1.8364, 1.4750, 1.7623], device='cuda:6'), covar=tensor([0.0709, 0.1035, 0.1670, 0.2204, 0.1338, 0.1948, 0.2151, 0.1124], device='cuda:6'), in_proj_covar=tensor([0.0161, 0.0181, 0.0194, 0.0178, 0.0202, 0.0201, 0.0204, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-25 23:32:50,672 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2395, 1.8245, 2.7525, 3.9765, 3.0416, 2.7355, 0.9359, 3.2941], device='cuda:6'), covar=tensor([0.1911, 0.1604, 0.1374, 0.0593, 0.0818, 0.1433, 0.2160, 0.0616], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0120, 0.0139, 0.0162, 0.0105, 0.0145, 0.0130, 0.0109], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-25 23:32:58,318 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6514.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:32:58,813 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.464e+02 1.998e+02 2.267e+02 2.688e+02 5.596e+02, threshold=4.534e+02, percent-clipped=2.0 2023-03-25 23:33:04,278 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6523.0, num_to_drop=1, layers_to_drop={0} 2023-03-25 23:33:06,035 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6526.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:07,184 INFO [finetune.py:976] (6/7) Epoch 2, batch 800, loss[loss=0.277, simple_loss=0.3065, pruned_loss=0.1237, over 4745.00 frames. ], tot_loss[loss=0.3066, simple_loss=0.3417, pruned_loss=0.1358, over 939604.25 frames. ], batch size: 59, lr: 4.00e-03, grad_scale: 16.0 2023-03-25 23:33:07,375 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6528.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:40,474 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6559.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:40,886 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-25 23:33:42,362 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6562.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:51,413 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6567.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:33:52,911 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-25 23:33:58,060 INFO [finetune.py:976] (6/7) Epoch 2, batch 850, loss[loss=0.2499, simple_loss=0.3002, pruned_loss=0.09975, over 4912.00 frames. ], tot_loss[loss=0.3051, simple_loss=0.3396, pruned_loss=0.1353, over 941494.56 frames. 
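The finetune.py:976 entries above repeat one pattern: loss[...] reports the current batch, tot_loss[...] a running average, each as (loss, simple_loss, pruned_loss, frame count). Throughout this log the combined loss equals pruned_loss plus half of simple_loss (e.g. at batch 300 above, 0.5 * 0.3438 + 0.1384 = 0.3103). A minimal Python sketch for checking that relation while skimming a log; the 0.5 weight is read off the logged numbers, and the helper name check_tot_loss is hypothetical, not part of finetune.py:

import re

SIMPLE_LOSS_SCALE = 0.5  # inferred from the logged values, not read from any config

LOSS_RE = re.compile(
    r"tot_loss\[loss=([0-9.]+), simple_loss=([0-9.]+), pruned_loss=([0-9.]+)")

def check_tot_loss(line: str) -> float:
    """Recompute the combined loss from its two components and return it."""
    loss, simple, pruned = map(float, LOSS_RE.search(line).groups())
    recomputed = SIMPLE_LOSS_SCALE * simple + pruned
    assert abs(recomputed - loss) < 5e-4, (loss, recomputed)  # tolerate rounding
    return recomputed

# Example against the batch 300 entry above: prints 0.3103
print(round(check_tot_loss(
    "tot_loss[loss=0.3103, simple_loss=0.3438, pruned_loss=0.1384, "
    "over 748398.14 frames. ]"), 4))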
2023-03-25 23:34:37,095 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.789e+02 2.218e+02 2.697e+02 5.451e+02, threshold=4.436e+02, percent-clipped=1.0
2023-03-25 23:34:42,632 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6622.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:34:46,290 INFO [finetune.py:976] (6/7) Epoch 2, batch 900, loss[loss=0.2807, simple_loss=0.321, pruned_loss=0.1203, over 4768.00 frames. ], tot_loss[loss=0.2995, simple_loss=0.3349, pruned_loss=0.132, over 942234.21 frames. ], batch size: 28, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:34:47,131 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6629.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:35:02,806 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6646.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:35:07,730 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0730, 0.7946, 0.8951, 0.8449, 1.1718, 1.2119, 1.0322, 0.9336], device='cuda:6'), covar=tensor([0.0294, 0.0388, 0.0569, 0.0376, 0.0284, 0.0307, 0.0282, 0.0368], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0111, 0.0130, 0.0111, 0.0102, 0.0098, 0.0087, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.4344e-05, 8.7789e-05, 1.0536e-04, 8.7810e-05, 8.1462e-05, 7.3021e-05, 6.7364e-05, 8.4612e-05], device='cuda:6')
2023-03-25 23:35:14,586 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2952, 1.3452, 1.4266, 0.7013, 1.3039, 1.5990, 1.5469, 1.3531], device='cuda:6'), covar=tensor([0.1149, 0.0610, 0.0520, 0.0818, 0.0422, 0.0600, 0.0399, 0.0649], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0147, 0.0113, 0.0124, 0.0124, 0.0113, 0.0140, 0.0140], device='cuda:6'), out_proj_covar=tensor([9.2489e-05, 1.1004e-04, 8.2510e-05, 9.1368e-05, 9.0236e-05, 8.3677e-05, 1.0461e-04, 1.0352e-04], device='cuda:6')
2023-03-25 23:35:25,033 INFO [finetune.py:976] (6/7) Epoch 2, batch 950, loss[loss=0.3862, simple_loss=0.4049, pruned_loss=0.1838, over 4864.00 frames. ], tot_loss[loss=0.2983, simple_loss=0.3337, pruned_loss=0.1315, over 945468.76 frames. ], batch size: 44, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:35:32,455 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6690.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:35:34,866 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6694.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:35:37,911 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6699.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:35:48,890 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.729e+02 2.077e+02 2.706e+02 4.933e+02, threshold=4.155e+02, percent-clipped=1.0
2023-03-25 23:36:03,624 INFO [finetune.py:976] (6/7) Epoch 2, batch 1000, loss[loss=0.3224, simple_loss=0.3825, pruned_loss=0.1311, over 4857.00 frames. ], tot_loss[loss=0.2999, simple_loss=0.3355, pruned_loss=0.1321, over 946142.39 frames. ], batch size: 44, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:36:28,551 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6747.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:37:01,237 INFO [finetune.py:976] (6/7) Epoch 2, batch 1050, loss[loss=0.3348, simple_loss=0.366, pruned_loss=0.1518, over 4331.00 frames. ], tot_loss[loss=0.3017, simple_loss=0.3386, pruned_loss=0.1324, over 949634.56 frames. ], batch size: 65, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:37:09,200 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6791.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:37:26,479 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-25 23:37:29,666 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.307e+02 2.050e+02 2.547e+02 2.958e+02 5.414e+02, threshold=5.095e+02, percent-clipped=8.0
2023-03-25 23:37:37,330 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6818.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:37:40,276 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5164, 3.2608, 3.2558, 1.5559, 3.4632, 2.5422, 0.8602, 2.2181], device='cuda:6'), covar=tensor([0.2244, 0.1805, 0.1575, 0.3234, 0.1130, 0.1021, 0.4187, 0.1543], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0162, 0.0164, 0.0127, 0.0154, 0.0118, 0.0146, 0.0120], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-25 23:37:40,278 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6823.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:37:47,812 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6826.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:37:48,960 INFO [finetune.py:976] (6/7) Epoch 2, batch 1100, loss[loss=0.3113, simple_loss=0.3577, pruned_loss=0.1324, over 4924.00 frames. ], tot_loss[loss=0.3016, simple_loss=0.3391, pruned_loss=0.132, over 951592.76 frames. ], batch size: 33, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:37:56,678 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6839.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:38:18,855 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6859.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:38:23,766 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6867.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:38:28,923 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6874.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:38:31,316 INFO [finetune.py:976] (6/7) Epoch 2, batch 1150, loss[loss=0.287, simple_loss=0.3381, pruned_loss=0.118, over 4863.00 frames. ], tot_loss[loss=0.302, simple_loss=0.3401, pruned_loss=0.132, over 950778.71 frames. ], batch size: 34, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:38:31,428 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6878.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:38:52,177 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6907.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:38:53,409 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=6909.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:38:56,970 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 2.014e+02 2.348e+02 2.865e+02 5.036e+02, threshold=4.696e+02, percent-clipped=0.0
2023-03-25 23:38:57,045 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6915.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:39:08,213 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=6922.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:39:17,350 INFO [finetune.py:976] (6/7) Epoch 2, batch 1200, loss[loss=0.3293, simple_loss=0.3544, pruned_loss=0.1521, over 4850.00 frames. ], tot_loss[loss=0.2994, simple_loss=0.3375, pruned_loss=0.1306, over 950176.88 frames. ], batch size: 44, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:39:31,050 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6939.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 23:39:46,431 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-25 23:39:49,901 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=6970.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:39:49,953 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=6970.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:39:51,030 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.2248, 4.4488, 4.6393, 5.0468, 4.8469, 4.6284, 5.3510, 1.6430], device='cuda:6'), covar=tensor([0.0729, 0.0826, 0.0706, 0.0823, 0.1344, 0.1332, 0.0516, 0.5446], device='cuda:6'), in_proj_covar=tensor([0.0370, 0.0247, 0.0271, 0.0297, 0.0349, 0.0291, 0.0313, 0.0305], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:39:56,179 INFO [finetune.py:976] (6/7) Epoch 2, batch 1250, loss[loss=0.2185, simple_loss=0.2722, pruned_loss=0.08241, over 4826.00 frames. ], tot_loss[loss=0.2953, simple_loss=0.3335, pruned_loss=0.1286, over 950369.26 frames. ], batch size: 30, lr: 4.00e-03, grad_scale: 16.0
2023-03-25 23:40:01,109 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=6985.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:40:23,992 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.288e+02 1.918e+02 2.412e+02 2.798e+02 4.765e+02, threshold=4.825e+02, percent-clipped=1.0
2023-03-25 23:40:39,100 INFO [finetune.py:976] (6/7) Epoch 2, batch 1300, loss[loss=0.2769, simple_loss=0.2925, pruned_loss=0.1306, over 3997.00 frames. ], tot_loss[loss=0.2912, simple_loss=0.3292, pruned_loss=0.1266, over 950902.13 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:40:48,880 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-25 23:40:52,837 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7048.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:41:17,562 INFO [finetune.py:976] (6/7) Epoch 2, batch 1350, loss[loss=0.2831, simple_loss=0.3465, pruned_loss=0.1098, over 4909.00 frames. ], tot_loss[loss=0.29, simple_loss=0.3282, pruned_loss=0.1259, over 952412.90 frames. ], batch size: 43, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:41:31,509 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6573, 1.1610, 1.1615, 1.3026, 1.1301, 1.2241, 1.2651, 1.3634], device='cuda:6'), covar=tensor([3.3147, 6.0047, 4.1241, 4.9513, 5.5070, 3.4288, 6.8907, 3.6371], device='cuda:6'), in_proj_covar=tensor([0.0218, 0.0251, 0.0237, 0.0263, 0.0241, 0.0214, 0.0272, 0.0209], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 23:41:41,178 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7109.0, num_to_drop=1, layers_to_drop={3}
2023-03-25 23:41:47,303 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.456e+02 1.876e+02 2.208e+02 2.586e+02 5.614e+02, threshold=4.416e+02, percent-clipped=5.0
2023-03-25 23:41:49,203 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7118.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:41:52,280 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7123.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:41:55,444 INFO [finetune.py:976] (6/7) Epoch 2, batch 1400, loss[loss=0.393, simple_loss=0.416, pruned_loss=0.1851, over 4758.00 frames. ], tot_loss[loss=0.296, simple_loss=0.334, pruned_loss=0.129, over 951892.40 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:41:56,688 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5964, 1.4246, 1.2183, 1.2532, 1.6747, 1.8553, 1.5186, 1.1634], device='cuda:6'), covar=tensor([0.0324, 0.0394, 0.0568, 0.0404, 0.0278, 0.0300, 0.0362, 0.0444], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0111, 0.0131, 0.0112, 0.0103, 0.0098, 0.0088, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.4343e-05, 8.8083e-05, 1.0605e-04, 8.8700e-05, 8.1984e-05, 7.2943e-05, 6.7759e-05, 8.4834e-05], device='cuda:6')
2023-03-25 23:42:10,346 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.83 vs. limit=5.0
2023-03-25 23:42:13,101 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0494, 2.1662, 1.9452, 1.3985, 2.4377, 2.3271, 2.1222, 1.9493], device='cuda:6'), covar=tensor([0.0758, 0.0628, 0.0792, 0.1055, 0.0395, 0.0724, 0.0793, 0.1092], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0132, 0.0140, 0.0128, 0.0108, 0.0138, 0.0144, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:42:29,988 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6313, 1.4471, 2.2211, 3.4430, 2.3676, 2.4546, 1.3394, 2.5907], device='cuda:6'), covar=tensor([0.2253, 0.1862, 0.1482, 0.0575, 0.0930, 0.1531, 0.1938, 0.0833], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0119, 0.0138, 0.0161, 0.0104, 0.0145, 0.0130, 0.0108], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-25 23:42:31,264 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7163.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:42:33,029 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7166.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:42:36,037 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7171.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:42:40,188 INFO [finetune.py:976] (6/7) Epoch 2, batch 1450, loss[loss=0.2698, simple_loss=0.3218, pruned_loss=0.1089, over 4764.00 frames. ], tot_loss[loss=0.297, simple_loss=0.3351, pruned_loss=0.1294, over 950289.74 frames. ], batch size: 54, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:43:29,960 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.933e+02 2.198e+02 2.731e+02 4.077e+02, threshold=4.395e+02, percent-clipped=0.0
2023-03-25 23:43:38,223 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.29 vs. limit=5.0
2023-03-25 23:43:40,429 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7224.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 23:43:42,757 INFO [finetune.py:976] (6/7) Epoch 2, batch 1500, loss[loss=0.2954, simple_loss=0.3427, pruned_loss=0.124, over 4890.00 frames. ], tot_loss[loss=0.2969, simple_loss=0.3354, pruned_loss=0.1292, over 949230.10 frames. ], batch size: 43, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:43:51,987 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7234.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 23:44:36,528 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7265.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:44:36,643 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-25 23:44:47,926 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7275.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:44:49,674 INFO [finetune.py:976] (6/7) Epoch 2, batch 1550, loss[loss=0.2435, simple_loss=0.2981, pruned_loss=0.09444, over 4824.00 frames. ], tot_loss[loss=0.2972, simple_loss=0.3357, pruned_loss=0.1294, over 949731.42 frames. ], batch size: 30, lr: 3.99e-03, grad_scale: 16.0
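In the optim.py:369 entries, the five numbers after "grad-norm quartiles" are the 0/25/50/75/100% points of recent gradient norms, and in every such line here the reported threshold is Clipping_scale times the median (e.g. 2.0 * 2.300e+02 = 4.600e+02 just below). A sketch of that bookkeeping, assuming a plain array of recent norms; it mirrors the logged fields only and is not the actual icefall optimizer code:

import numpy as np

CLIPPING_SCALE = 2.0  # matches Clipping_scale=2.0 in the entries above

def clipping_stats(recent_grad_norms, current_norm):
    """Report quartiles, the clipping threshold, and whether current_norm exceeds it."""
    q = np.percentile(recent_grad_norms, [0, 25, 50, 75, 100])
    threshold = CLIPPING_SCALE * q[2]   # threshold = scale * median, as in the log
    clipped = current_norm > threshold  # a clipped step would be scaled by threshold / current_norm
    return q, threshold, clipped

# With norms spread like the log's quartiles, a 7.4e+02 gradient would be clipped:
rng = np.random.default_rng(0)
norms = rng.uniform(1.1e2, 3.8e2, size=128)
print(clipping_stats(norms, 7.4e2))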
2023-03-25 23:44:57,964 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7285.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:45:31,202 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.380e+02 1.989e+02 2.300e+02 2.720e+02 4.709e+02, threshold=4.600e+02, percent-clipped=4.0
2023-03-25 23:45:39,141 INFO [finetune.py:976] (6/7) Epoch 2, batch 1600, loss[loss=0.2477, simple_loss=0.2938, pruned_loss=0.1008, over 4823.00 frames. ], tot_loss[loss=0.2934, simple_loss=0.3323, pruned_loss=0.1273, over 949817.13 frames. ], batch size: 38, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:45:41,153 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-25 23:45:42,236 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7333.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:45:44,191 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7336.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:46:35,383 INFO [finetune.py:976] (6/7) Epoch 2, batch 1650, loss[loss=0.3013, simple_loss=0.3335, pruned_loss=0.1345, over 4905.00 frames. ], tot_loss[loss=0.2872, simple_loss=0.3266, pruned_loss=0.124, over 949720.82 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:46:37,385 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6080, 1.3729, 1.4517, 1.6706, 2.0902, 1.5589, 1.2726, 1.2773], device='cuda:6'), covar=tensor([0.3200, 0.3040, 0.2537, 0.2324, 0.2575, 0.1748, 0.3714, 0.2454], device='cuda:6'), in_proj_covar=tensor([0.0219, 0.0201, 0.0188, 0.0173, 0.0223, 0.0168, 0.0204, 0.0177], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:46:59,570 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7404.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 23:47:01,914 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7407.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:47:07,160 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.344e+02 1.965e+02 2.306e+02 2.769e+02 7.650e+02, threshold=4.611e+02, percent-clipped=3.0
2023-03-25 23:47:15,138 INFO [finetune.py:976] (6/7) Epoch 2, batch 1700, loss[loss=0.2364, simple_loss=0.2847, pruned_loss=0.09398, over 4756.00 frames. ], tot_loss[loss=0.2846, simple_loss=0.324, pruned_loss=0.1226, over 951767.60 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:47:22,703 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-25 23:47:48,737 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7468.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:48:00,184 INFO [finetune.py:976] (6/7) Epoch 2, batch 1750, loss[loss=0.2701, simple_loss=0.3276, pruned_loss=0.1062, over 4811.00 frames. ], tot_loss[loss=0.2868, simple_loss=0.3261, pruned_loss=0.1237, over 951865.17 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:48:35,432 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8616, 1.6556, 1.3117, 1.9148, 2.2099, 1.5067, 2.4973, 1.7785], device='cuda:6'), covar=tensor([0.3469, 0.7768, 0.6928, 0.7463, 0.4156, 0.3416, 0.5712, 0.5042], device='cuda:6'), in_proj_covar=tensor([0.0160, 0.0191, 0.0234, 0.0246, 0.0208, 0.0178, 0.0196, 0.0185], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:48:36,608 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
2023-03-25 23:48:46,572 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.280e+02 2.019e+02 2.351e+02 2.794e+02 5.482e+02, threshold=4.701e+02, percent-clipped=1.0
2023-03-25 23:48:53,200 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7519.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 23:48:54,436 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2100, 1.9445, 2.6017, 1.7367, 2.3266, 2.4143, 1.8732, 2.4873], device='cuda:6'), covar=tensor([0.1782, 0.2105, 0.1874, 0.2660, 0.1094, 0.2056, 0.2457, 0.1182], device='cuda:6'), in_proj_covar=tensor([0.0201, 0.0203, 0.0201, 0.0192, 0.0175, 0.0220, 0.0211, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:49:03,492 INFO [finetune.py:976] (6/7) Epoch 2, batch 1800, loss[loss=0.2683, simple_loss=0.3161, pruned_loss=0.1103, over 4895.00 frames. ], tot_loss[loss=0.2908, simple_loss=0.3309, pruned_loss=0.1253, over 954517.79 frames. ], batch size: 43, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:49:03,576 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6812, 1.3213, 2.0643, 3.2716, 2.2975, 2.4187, 0.7700, 2.7699], device='cuda:6'), covar=tensor([0.2085, 0.2335, 0.1584, 0.0817, 0.0994, 0.1504, 0.2511, 0.0813], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0119, 0.0138, 0.0160, 0.0104, 0.0145, 0.0130, 0.0107], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-25 23:49:12,370 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7534.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:49:44,260 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7565.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:49:45,985 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8529, 1.6268, 1.3472, 1.6310, 1.9389, 1.5077, 2.1116, 1.7469], device='cuda:6'), covar=tensor([0.2961, 0.5937, 0.6172, 0.6131, 0.4067, 0.3141, 0.6881, 0.4118], device='cuda:6'), in_proj_covar=tensor([0.0160, 0.0191, 0.0234, 0.0246, 0.0208, 0.0178, 0.0197, 0.0185], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:49:57,757 INFO [finetune.py:976] (6/7) Epoch 2, batch 1850, loss[loss=0.2954, simple_loss=0.3305, pruned_loss=0.1302, over 4763.00 frames. ], tot_loss[loss=0.2934, simple_loss=0.3329, pruned_loss=0.1269, over 954824.44 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:50:05,101 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7582.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 23:50:47,092 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7613.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:50:48,712 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.368e+02 2.140e+02 2.552e+02 3.133e+02 4.516e+02, threshold=5.105e+02, percent-clipped=0.0
2023-03-25 23:51:00,622 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7624.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:51:01,903 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.30 vs. limit=5.0
2023-03-25 23:51:02,925 INFO [finetune.py:976] (6/7) Epoch 2, batch 1900, loss[loss=0.2317, simple_loss=0.2823, pruned_loss=0.09054, over 4720.00 frames. ], tot_loss[loss=0.2928, simple_loss=0.3328, pruned_loss=0.1264, over 955410.08 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:51:09,874 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7631.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:51:41,823 INFO [finetune.py:976] (6/7) Epoch 2, batch 1950, loss[loss=0.2628, simple_loss=0.3052, pruned_loss=0.1101, over 4783.00 frames. ], tot_loss[loss=0.2909, simple_loss=0.3313, pruned_loss=0.1253, over 956904.52 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:51:42,577 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7495, 1.2802, 1.3895, 1.3883, 1.2193, 1.2822, 1.4669, 1.3946], device='cuda:6'), covar=tensor([2.1759, 4.3174, 3.0432, 3.5305, 3.8597, 2.7025, 4.7254, 2.5749], device='cuda:6'), in_proj_covar=tensor([0.0221, 0.0255, 0.0241, 0.0267, 0.0244, 0.0217, 0.0276, 0.0213], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-25 23:51:47,806 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7685.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:51:48,580 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-25 23:51:59,346 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7704.0, num_to_drop=1, layers_to_drop={2}
2023-03-25 23:52:07,435 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.355e+02 1.951e+02 2.306e+02 2.786e+02 5.176e+02, threshold=4.611e+02, percent-clipped=1.0
2023-03-25 23:52:17,328 INFO [finetune.py:976] (6/7) Epoch 2, batch 2000, loss[loss=0.297, simple_loss=0.3401, pruned_loss=0.1269, over 4822.00 frames. ], tot_loss[loss=0.2863, simple_loss=0.3266, pruned_loss=0.1229, over 957052.77 frames. ], batch size: 41, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:52:26,494 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7743.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:52:31,890 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7752.0, num_to_drop=1, layers_to_drop={1}
2023-03-25 23:52:39,951 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7763.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:52:49,967 INFO [finetune.py:976] (6/7) Epoch 2, batch 2050, loss[loss=0.251, simple_loss=0.2884, pruned_loss=0.1067, over 4702.00 frames. ], tot_loss[loss=0.2832, simple_loss=0.3231, pruned_loss=0.1216, over 956980.39 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:53:05,256 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=7802.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:53:06,507 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7804.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:53:14,627 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.310e+02 1.893e+02 2.255e+02 2.795e+02 6.508e+02, threshold=4.510e+02, percent-clipped=3.0
2023-03-25 23:53:17,156 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7819.0, num_to_drop=1, layers_to_drop={0}
2023-03-25 23:53:26,209 INFO [finetune.py:976] (6/7) Epoch 2, batch 2100, loss[loss=0.3585, simple_loss=0.3814, pruned_loss=0.1679, over 4829.00 frames. ], tot_loss[loss=0.2817, simple_loss=0.3222, pruned_loss=0.1207, over 958445.57 frames. ], batch size: 39, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:53:27,591 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5605, 1.5028, 1.3109, 1.3589, 1.7373, 1.8245, 1.4594, 1.2367], device='cuda:6'), covar=tensor([0.0291, 0.0366, 0.0544, 0.0363, 0.0241, 0.0279, 0.0474, 0.0444], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0112, 0.0132, 0.0113, 0.0103, 0.0098, 0.0088, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.4113e-05, 8.8706e-05, 1.0698e-04, 8.9384e-05, 8.2097e-05, 7.3108e-05, 6.8414e-05, 8.5507e-05], device='cuda:6')
2023-03-25 23:53:45,201 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6615, 3.6656, 3.5659, 1.7447, 3.8361, 2.8598, 0.8293, 2.4513], device='cuda:6'), covar=tensor([0.2881, 0.1741, 0.1536, 0.3338, 0.0980, 0.0926, 0.4448, 0.1550], device='cuda:6'), in_proj_covar=tensor([0.0157, 0.0165, 0.0166, 0.0129, 0.0155, 0.0119, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-25 23:53:56,789 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=7863.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:54:00,169 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7867.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:54:08,357 INFO [finetune.py:976] (6/7) Epoch 2, batch 2150, loss[loss=0.2806, simple_loss=0.3287, pruned_loss=0.1162, over 4822.00 frames. ], tot_loss[loss=0.286, simple_loss=0.3267, pruned_loss=0.1227, over 959938.40 frames. ], batch size: 30, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:54:49,703 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.983e+02 2.392e+02 2.856e+02 4.131e+02, threshold=4.785e+02, percent-clipped=0.0
2023-03-25 23:55:10,157 INFO [finetune.py:976] (6/7) Epoch 2, batch 2200, loss[loss=0.324, simple_loss=0.344, pruned_loss=0.152, over 4146.00 frames. ], tot_loss[loss=0.2888, simple_loss=0.3298, pruned_loss=0.1239, over 956938.21 frames. ], batch size: 65, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:55:16,958 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=7931.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:55:27,334 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-25 23:55:30,808 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-25 23:55:51,380 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7158, 1.4856, 1.5580, 1.6099, 2.2420, 1.7491, 1.5060, 1.3964], device='cuda:6'), covar=tensor([0.2762, 0.2891, 0.2236, 0.2209, 0.2509, 0.1585, 0.3297, 0.2175], device='cuda:6'), in_proj_covar=tensor([0.0222, 0.0205, 0.0191, 0.0176, 0.0226, 0.0171, 0.0207, 0.0180], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:56:02,925 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7572, 1.6656, 1.2633, 1.8807, 2.0765, 1.4282, 2.4473, 1.6609], device='cuda:6'), covar=tensor([0.3228, 0.7411, 0.6715, 0.6621, 0.3813, 0.3158, 0.5649, 0.4696], device='cuda:6'), in_proj_covar=tensor([0.0161, 0.0192, 0.0235, 0.0247, 0.0209, 0.0179, 0.0198, 0.0186], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:56:12,549 INFO [finetune.py:976] (6/7) Epoch 2, batch 2250, loss[loss=0.26, simple_loss=0.3095, pruned_loss=0.1052, over 4765.00 frames. ], tot_loss[loss=0.2896, simple_loss=0.3311, pruned_loss=0.1241, over 955782.72 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:56:13,680 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=7979.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:56:19,274 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=7980.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:56:19,916 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0615, 0.7185, 0.9295, 0.1930, 0.5837, 1.0410, 1.0919, 1.0119], device='cuda:6'), covar=tensor([0.1134, 0.0793, 0.0508, 0.0910, 0.0613, 0.0679, 0.0401, 0.0579], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0150, 0.0114, 0.0127, 0.0127, 0.0114, 0.0141, 0.0140], device='cuda:6'), out_proj_covar=tensor([9.4149e-05, 1.1177e-04, 8.3494e-05, 9.3489e-05, 9.1942e-05, 8.4397e-05, 1.0536e-04, 1.0357e-04], device='cuda:6')
2023-03-25 23:57:05,133 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.321e+02 1.899e+02 2.269e+02 2.653e+02 4.132e+02, threshold=4.538e+02, percent-clipped=0.0
2023-03-25 23:57:24,776 INFO [finetune.py:976] (6/7) Epoch 2, batch 2300, loss[loss=0.2589, simple_loss=0.2886, pruned_loss=0.1146, over 4084.00 frames. ], tot_loss[loss=0.2882, simple_loss=0.3303, pruned_loss=0.123, over 955568.17 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 16.0
2023-03-25 23:57:53,440 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8063.0, num_to_drop=0, layers_to_drop=set()
2023-03-25 23:58:05,174 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5927, 1.4422, 1.2795, 1.4884, 1.7987, 1.3713, 2.0398, 1.4961], device='cuda:6'), covar=tensor([0.3160, 0.6280, 0.6343, 0.6376, 0.4157, 0.3196, 0.5109, 0.4467], device='cuda:6'), in_proj_covar=tensor([0.0161, 0.0193, 0.0235, 0.0248, 0.0209, 0.0179, 0.0199, 0.0186], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-25 23:58:05,650 INFO [finetune.py:976] (6/7) Epoch 2, batch 2350, loss[loss=0.2727, simple_loss=0.3155, pruned_loss=0.115, over 4889.00 frames. ], tot_loss[loss=0.2855, simple_loss=0.3274, pruned_loss=0.1218, over 955092.43 frames. ], batch size: 32, lr: 3.99e-03, grad_scale: 16.0
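The zipformer.py:1188 entries track, per encoder stack, a warmup window (warmup_begin/warmup_end, in batches) and whether a layer is randomly dropped for the current batch. Note that drops such as num_to_drop=1, layers_to_drop={0} still occur long after warmup_end (e.g. batch_count=7819.0 against warmup_end=2666.7 above), so dropping becomes rarer after warmup rather than disabled. An illustrative reconstruction of how a schedule could produce these fields; both probabilities below are assumptions, and the real policy lives in zipformer.py:

import random

def pick_layers_to_drop(batch_count, warmup_begin, warmup_end, num_layers, rng=random):
    """Return (num_to_drop, layers_to_drop) in the spirit of the logged fields.

    Assumed policy: drop one random layer often during the warmup window and
    only occasionally afterwards; both rates are guesses, not values taken
    from zipformer.py.
    """
    in_warmup = warmup_begin <= batch_count < warmup_end
    p_drop = 0.5 if in_warmup else 0.05  # both values assumed
    num_to_drop = 1 if rng.random() < p_drop else 0
    layers = set(rng.sample(range(num_layers), num_to_drop))
    return num_to_drop, layers

# e.g. a stack with window [2000.0, 2666.7) at batch_count=7819.0 can still
# occasionally yield (1, {0}), as in several entries above
print(pick_layers_to_drop(7819.0, 2000.0, 2666.7, num_layers=4))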
], batch size: 32, lr: 3.99e-03, grad_scale: 16.0 2023-03-25 23:58:17,058 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3250, 1.2177, 1.6459, 1.1419, 1.2313, 1.4312, 1.3065, 1.5883], device='cuda:6'), covar=tensor([0.1277, 0.2123, 0.1010, 0.1231, 0.1009, 0.1186, 0.2521, 0.0860], device='cuda:6'), in_proj_covar=tensor([0.0202, 0.0204, 0.0201, 0.0193, 0.0175, 0.0221, 0.0211, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-25 23:58:20,621 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8099.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:58:39,058 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8111.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:58:41,410 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.369e+02 1.949e+02 2.211e+02 2.649e+02 5.737e+02, threshold=4.422e+02, percent-clipped=2.0 2023-03-25 23:59:01,109 INFO [finetune.py:976] (6/7) Epoch 2, batch 2400, loss[loss=0.2571, simple_loss=0.3101, pruned_loss=0.102, over 4935.00 frames. ], tot_loss[loss=0.2805, simple_loss=0.3231, pruned_loss=0.1189, over 957059.64 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 32.0 2023-03-25 23:59:15,035 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.00 vs. limit=2.0 2023-03-25 23:59:23,797 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-25 23:59:27,553 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8158.0, num_to_drop=0, layers_to_drop=set() 2023-03-25 23:59:39,734 INFO [finetune.py:976] (6/7) Epoch 2, batch 2450, loss[loss=0.273, simple_loss=0.324, pruned_loss=0.111, over 4852.00 frames. ], tot_loss[loss=0.2774, simple_loss=0.3196, pruned_loss=0.1177, over 956708.59 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:00:01,996 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 00:00:20,953 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.358e+02 1.915e+02 2.165e+02 2.652e+02 4.234e+02, threshold=4.330e+02, percent-clipped=0.0 2023-03-26 00:00:34,016 INFO [finetune.py:976] (6/7) Epoch 2, batch 2500, loss[loss=0.286, simple_loss=0.3353, pruned_loss=0.1184, over 4791.00 frames. ], tot_loss[loss=0.2799, simple_loss=0.3218, pruned_loss=0.119, over 955509.77 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:01:11,711 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8265.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:01:29,597 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7042, 1.5451, 2.2492, 3.3620, 2.3634, 2.3850, 0.9058, 2.7103], device='cuda:6'), covar=tensor([0.2083, 0.1812, 0.1432, 0.0640, 0.0922, 0.1320, 0.2378, 0.0709], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0120, 0.0139, 0.0162, 0.0105, 0.0146, 0.0132, 0.0108], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 00:01:30,124 INFO [finetune.py:976] (6/7) Epoch 2, batch 2550, loss[loss=0.2464, simple_loss=0.2998, pruned_loss=0.09654, over 4796.00 frames. ], tot_loss[loss=0.2813, simple_loss=0.3236, pruned_loss=0.1195, over 955867.66 frames. 
], batch size: 25, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:01:31,471 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8280.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:01:51,591 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 00:02:02,894 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.611e+02 2.134e+02 2.551e+02 3.097e+02 6.135e+02, threshold=5.102e+02, percent-clipped=4.0 2023-03-26 00:02:09,801 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8326.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:02:10,867 INFO [finetune.py:976] (6/7) Epoch 2, batch 2600, loss[loss=0.2546, simple_loss=0.3026, pruned_loss=0.1033, over 4744.00 frames. ], tot_loss[loss=0.2821, simple_loss=0.325, pruned_loss=0.1196, over 957749.50 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:02:10,928 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8328.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:03:09,433 INFO [finetune.py:976] (6/7) Epoch 2, batch 2650, loss[loss=0.3006, simple_loss=0.3415, pruned_loss=0.1299, over 4782.00 frames. ], tot_loss[loss=0.2835, simple_loss=0.327, pruned_loss=0.12, over 956905.38 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:03:28,525 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:03:45,676 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.253e+02 1.975e+02 2.270e+02 2.796e+02 5.066e+02, threshold=4.539e+02, percent-clipped=0.0 2023-03-26 00:03:57,447 INFO [finetune.py:976] (6/7) Epoch 2, batch 2700, loss[loss=0.3038, simple_loss=0.3447, pruned_loss=0.1315, over 4816.00 frames. ], tot_loss[loss=0.2836, simple_loss=0.3268, pruned_loss=0.1201, over 955803.61 frames. ], batch size: 40, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:04:21,450 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8447.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:04:28,892 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8458.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:04:47,919 INFO [finetune.py:976] (6/7) Epoch 2, batch 2750, loss[loss=0.2465, simple_loss=0.2843, pruned_loss=0.1044, over 4767.00 frames. ], tot_loss[loss=0.2786, simple_loss=0.3221, pruned_loss=0.1175, over 955769.18 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:05:15,011 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8506.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:05:25,224 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.975e+02 2.311e+02 2.901e+02 5.278e+02, threshold=4.621e+02, percent-clipped=1.0 2023-03-26 00:05:38,378 INFO [finetune.py:976] (6/7) Epoch 2, batch 2800, loss[loss=0.2536, simple_loss=0.3035, pruned_loss=0.1018, over 4894.00 frames. ], tot_loss[loss=0.2737, simple_loss=0.3175, pruned_loss=0.115, over 958161.60 frames. ], batch size: 35, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:06:44,210 INFO [finetune.py:976] (6/7) Epoch 2, batch 2850, loss[loss=0.277, simple_loss=0.3283, pruned_loss=0.1129, over 4822.00 frames. ], tot_loss[loss=0.2705, simple_loss=0.3145, pruned_loss=0.1133, over 957565.63 frames. 
], batch size: 40, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:07:04,742 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3976, 1.4473, 1.4147, 0.8690, 1.6639, 1.5007, 1.4369, 1.4201], device='cuda:6'), covar=tensor([0.0699, 0.0805, 0.0776, 0.1040, 0.0613, 0.0786, 0.0785, 0.1197], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0131, 0.0141, 0.0128, 0.0107, 0.0138, 0.0145, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 00:07:08,271 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0577, 1.0539, 1.0833, 0.3701, 0.8515, 1.1545, 1.1956, 1.1001], device='cuda:6'), covar=tensor([0.0929, 0.0573, 0.0472, 0.0692, 0.0499, 0.0574, 0.0449, 0.0610], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0149, 0.0113, 0.0127, 0.0125, 0.0113, 0.0140, 0.0139], device='cuda:6'), out_proj_covar=tensor([9.3448e-05, 1.1129e-04, 8.3201e-05, 9.3419e-05, 9.0962e-05, 8.3679e-05, 1.0483e-04, 1.0298e-04], device='cuda:6') 2023-03-26 00:07:08,738 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.882e+02 2.289e+02 2.777e+02 4.779e+02, threshold=4.578e+02, percent-clipped=1.0 2023-03-26 00:07:13,828 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=8621.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:07:18,067 INFO [finetune.py:976] (6/7) Epoch 2, batch 2900, loss[loss=0.3485, simple_loss=0.3728, pruned_loss=0.1621, over 4837.00 frames. ], tot_loss[loss=0.276, simple_loss=0.3196, pruned_loss=0.1162, over 958043.24 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:07:39,072 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.2415, 4.6285, 4.7648, 5.1047, 4.9639, 4.7120, 5.3625, 1.6359], device='cuda:6'), covar=tensor([0.0653, 0.0772, 0.0669, 0.0805, 0.1084, 0.1281, 0.0516, 0.5014], device='cuda:6'), in_proj_covar=tensor([0.0373, 0.0248, 0.0276, 0.0301, 0.0349, 0.0291, 0.0317, 0.0307], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 00:07:51,242 INFO [finetune.py:976] (6/7) Epoch 2, batch 2950, loss[loss=0.2602, simple_loss=0.2936, pruned_loss=0.1134, over 4726.00 frames. ], tot_loss[loss=0.2799, simple_loss=0.3235, pruned_loss=0.1181, over 958333.60 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:08:12,187 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5022, 2.4412, 2.1388, 2.5840, 1.9986, 4.9003, 2.4059, 3.0010], device='cuda:6'), covar=tensor([0.2692, 0.1958, 0.1752, 0.1771, 0.1489, 0.0086, 0.2053, 0.1132], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0108, 0.0114, 0.0115, 0.0112, 0.0094, 0.0098, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 00:08:17,805 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.871e+02 2.334e+02 2.881e+02 5.890e+02, threshold=4.669e+02, percent-clipped=2.0 2023-03-26 00:08:27,821 INFO [finetune.py:976] (6/7) Epoch 2, batch 3000, loss[loss=0.3082, simple_loss=0.3532, pruned_loss=0.1317, over 4815.00 frames. ], tot_loss[loss=0.2821, simple_loss=0.3254, pruned_loss=0.1194, over 958448.47 frames. 
], batch size: 45, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:08:27,822 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 00:08:29,689 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8686, 1.2542, 0.9924, 1.6432, 2.1241, 1.1766, 1.4677, 1.6715], device='cuda:6'), covar=tensor([0.1503, 0.2076, 0.1957, 0.1181, 0.1978, 0.2043, 0.1251, 0.2012], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0097, 0.0115, 0.0092, 0.0124, 0.0096, 0.0098, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 00:08:33,036 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4532, 1.2424, 1.2468, 1.1975, 1.5579, 1.5957, 1.3691, 1.1321], device='cuda:6'), covar=tensor([0.0316, 0.0429, 0.0655, 0.0441, 0.0338, 0.0403, 0.0323, 0.0503], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0113, 0.0133, 0.0113, 0.0104, 0.0098, 0.0088, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.4524e-05, 8.9565e-05, 1.0715e-04, 8.9722e-05, 8.2440e-05, 7.2920e-05, 6.7938e-05, 8.5285e-05], device='cuda:6') 2023-03-26 00:08:43,567 INFO [finetune.py:1010] (6/7) Epoch 2, validation: loss=0.1956, simple_loss=0.2636, pruned_loss=0.06384, over 2265189.00 frames. 2023-03-26 00:08:43,568 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB 2023-03-26 00:09:09,177 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:09:26,207 INFO [finetune.py:976] (6/7) Epoch 2, batch 3050, loss[loss=0.2675, simple_loss=0.321, pruned_loss=0.107, over 4824.00 frames. ], tot_loss[loss=0.2825, simple_loss=0.3262, pruned_loss=0.1194, over 957851.12 frames. ], batch size: 38, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:10:07,392 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.935e+02 2.296e+02 2.584e+02 4.666e+02, threshold=4.592e+02, percent-clipped=0.0 2023-03-26 00:10:11,735 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:10:16,914 INFO [finetune.py:976] (6/7) Epoch 2, batch 3100, loss[loss=0.2535, simple_loss=0.2964, pruned_loss=0.1053, over 4734.00 frames. ], tot_loss[loss=0.2789, simple_loss=0.3231, pruned_loss=0.1174, over 959005.77 frames. ], batch size: 54, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:10:44,405 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1198, 2.6438, 2.5401, 1.3284, 2.7329, 2.1412, 0.9775, 1.8269], device='cuda:6'), covar=tensor([0.3040, 0.2005, 0.1758, 0.2876, 0.1338, 0.1006, 0.3371, 0.1464], device='cuda:6'), in_proj_covar=tensor([0.0158, 0.0167, 0.0167, 0.0130, 0.0157, 0.0121, 0.0149, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 00:10:56,831 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5318, 1.2586, 1.2801, 1.2028, 1.6358, 1.3093, 1.5764, 1.4182], device='cuda:6'), covar=tensor([0.2732, 0.5606, 0.5930, 0.5245, 0.3953, 0.3042, 0.4431, 0.4177], device='cuda:6'), in_proj_covar=tensor([0.0162, 0.0194, 0.0237, 0.0251, 0.0212, 0.0181, 0.0202, 0.0187], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 00:10:59,133 INFO [finetune.py:976] (6/7) Epoch 2, batch 3150, loss[loss=0.2169, simple_loss=0.2712, pruned_loss=0.08134, over 4874.00 frames. 
], tot_loss[loss=0.2766, simple_loss=0.32, pruned_loss=0.1166, over 958451.69 frames. ], batch size: 31, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:10:59,280 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7511, 1.6108, 1.2684, 1.8847, 1.8715, 1.4869, 2.2987, 1.6715], device='cuda:6'), covar=tensor([0.2771, 0.6082, 0.6304, 0.5585, 0.3929, 0.3002, 0.4852, 0.3949], device='cuda:6'), in_proj_covar=tensor([0.0162, 0.0194, 0.0237, 0.0251, 0.0212, 0.0181, 0.0202, 0.0187], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 00:11:28,136 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.838e+02 2.165e+02 2.835e+02 5.909e+02, threshold=4.329e+02, percent-clipped=1.0 2023-03-26 00:11:33,411 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=8921.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:11:43,256 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8927.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:11:43,730 INFO [finetune.py:976] (6/7) Epoch 2, batch 3200, loss[loss=0.3179, simple_loss=0.3467, pruned_loss=0.1445, over 4853.00 frames. ], tot_loss[loss=0.2724, simple_loss=0.3158, pruned_loss=0.1145, over 958324.69 frames. ], batch size: 47, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:12:25,339 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=8969.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:12:30,758 INFO [finetune.py:976] (6/7) Epoch 2, batch 3250, loss[loss=0.3116, simple_loss=0.3553, pruned_loss=0.1339, over 4899.00 frames. ], tot_loss[loss=0.2736, simple_loss=0.3165, pruned_loss=0.1153, over 955009.37 frames. ], batch size: 37, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:12:36,284 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=8986.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:12:37,518 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=8988.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:13:02,650 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9013.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:13:03,774 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.184e+02 1.894e+02 2.281e+02 2.922e+02 4.541e+02, threshold=4.561e+02, percent-clipped=3.0 2023-03-26 00:13:07,024 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4209, 1.3200, 1.1868, 1.2292, 1.5703, 1.6351, 1.3779, 1.0883], device='cuda:6'), covar=tensor([0.0302, 0.0336, 0.0561, 0.0364, 0.0291, 0.0332, 0.0356, 0.0410], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0113, 0.0132, 0.0112, 0.0103, 0.0097, 0.0087, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.4531e-05, 8.9095e-05, 1.0712e-04, 8.9254e-05, 8.1914e-05, 7.2311e-05, 6.7471e-05, 8.4542e-05], device='cuda:6') 2023-03-26 00:13:11,706 INFO [finetune.py:976] (6/7) Epoch 2, batch 3300, loss[loss=0.307, simple_loss=0.361, pruned_loss=0.1265, over 4914.00 frames. ], tot_loss[loss=0.2783, simple_loss=0.3214, pruned_loss=0.1176, over 955040.99 frames. 
2023-03-26 00:13:11,706 INFO [finetune.py:976] (6/7) Epoch 2, batch 3300, loss[loss=0.307, simple_loss=0.361, pruned_loss=0.1265, over 4914.00 frames. ], tot_loss[loss=0.2783, simple_loss=0.3214, pruned_loss=0.1176, over 955040.99 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:13:18,621 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5616, 1.6207, 1.6820, 0.9166, 1.6368, 1.9320, 1.8898, 1.5379], device='cuda:6'), covar=tensor([0.1031, 0.0549, 0.0355, 0.0707, 0.0335, 0.0500, 0.0293, 0.0547], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0151, 0.0115, 0.0129, 0.0127, 0.0115, 0.0141, 0.0140], device='cuda:6'), out_proj_covar=tensor([9.4756e-05, 1.1240e-04, 8.4049e-05, 9.4593e-05, 9.1848e-05, 8.4681e-05, 1.0564e-04, 1.0366e-04], device='cuda:6')
2023-03-26 00:13:25,341 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9047.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:13:32,480 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.13 vs. limit=5.0
2023-03-26 00:13:42,689 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9074.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:13:44,937 INFO [finetune.py:976] (6/7) Epoch 2, batch 3350, loss[loss=0.2592, simple_loss=0.3098, pruned_loss=0.1043, over 4792.00 frames. ], tot_loss[loss=0.2805, simple_loss=0.3238, pruned_loss=0.1186, over 952687.17 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:13:49,916 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0
2023-03-26 00:14:20,188 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.430e+02 1.942e+02 2.307e+02 2.996e+02 6.023e+02, threshold=4.614e+02, percent-clipped=2.0
2023-03-26 00:14:20,878 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9116.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:14:21,309 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.78 vs. limit=5.0
2023-03-26 00:14:26,973 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8848, 1.1822, 0.9430, 1.4913, 2.0418, 0.9807, 1.4189, 1.6009], device='cuda:6'), covar=tensor([0.1540, 0.2290, 0.2225, 0.1324, 0.2117, 0.2205, 0.1415, 0.2124], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0098, 0.0117, 0.0093, 0.0125, 0.0097, 0.0099, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 00:14:28,094 INFO [finetune.py:976] (6/7) Epoch 2, batch 3400, loss[loss=0.2784, simple_loss=0.3129, pruned_loss=0.122, over 4857.00 frames. ], tot_loss[loss=0.2813, simple_loss=0.3249, pruned_loss=0.1189, over 954167.62 frames. ], batch size: 31, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:15:23,465 INFO [finetune.py:976] (6/7) Epoch 2, batch 3450, loss[loss=0.3059, simple_loss=0.3224, pruned_loss=0.1448, over 4926.00 frames. ], tot_loss[loss=0.2793, simple_loss=0.3227, pruned_loss=0.1179, over 954781.17 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:15:59,478 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.266e+02 1.993e+02 2.349e+02 2.850e+02 4.291e+02, threshold=4.698e+02, percent-clipped=0.0
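The scaling.py:679 lines compare a whiteness metric of some activation's channel covariance against a limit (e.g. metric=4.13 vs. limit=5.0 for one 384-channel group above). One statistic with exactly this behaviour is d * sum(l_i^2) / (sum(l_i))^2 over the covariance eigenvalues l_i: it equals 1.0 when the covariance is a multiple of the identity and grows toward d as energy concentrates in fewer directions. A sketch under that assumption; it need not be the exact formula in scaling.py:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels); channels are assumed grouped contiguously.
    # For each group covariance C with eigenvalues l_i, compute
    # d * sum(l_i^2) / (sum(l_i))^2 via traces, then average over groups.
    n, c = x.shape
    d = c // num_groups
    x = x.reshape(n, num_groups, d).transpose(0, 1)      # (groups, frames, d)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / n         # (groups, d, d)
    tr_c = cov.diagonal(dim1=1, dim2=2).sum(dim=1)       # trace(C) = sum l_i
    tr_c2 = (cov * cov.transpose(1, 2)).sum(dim=(1, 2))  # trace(C^2) = sum l_i^2
    return (d * tr_c2 / tr_c.clamp(min=1e-20) ** 2).mean()

# White data scores ~1.0; nearly rank-1 data approaches num_channels.
print(whitening_metric(torch.randn(10000, 384), num_groups=1))
x = torch.randn(10000, 1) * torch.ones(1, 384)
print(whitening_metric(x + 0.01 * torch.randn(10000, 384), num_groups=1))
```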
2023-03-26 00:16:12,592 INFO [finetune.py:976] (6/7) Epoch 2, batch 3500, loss[loss=0.3368, simple_loss=0.3533, pruned_loss=0.1602, over 4915.00 frames. ], tot_loss[loss=0.2757, simple_loss=0.319, pruned_loss=0.1162, over 956235.23 frames. ], batch size: 37, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:16:38,475 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0761, 0.8212, 0.9221, 0.9009, 1.1995, 1.1990, 1.0271, 0.9416], device='cuda:6'), covar=tensor([0.0254, 0.0336, 0.0558, 0.0325, 0.0276, 0.0314, 0.0269, 0.0345], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0113, 0.0133, 0.0113, 0.0103, 0.0097, 0.0087, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.4411e-05, 8.9103e-05, 1.0787e-04, 8.9603e-05, 8.2232e-05, 7.2723e-05, 6.7307e-05, 8.4673e-05], device='cuda:6')
2023-03-26 00:17:13,576 INFO [finetune.py:976] (6/7) Epoch 2, batch 3550, loss[loss=0.2597, simple_loss=0.3055, pruned_loss=0.107, over 4706.00 frames. ], tot_loss[loss=0.2709, simple_loss=0.3144, pruned_loss=0.1137, over 955964.59 frames. ], batch size: 59, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:17:16,742 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9283.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:17:53,395 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.769e+02 2.188e+02 2.766e+02 5.069e+02, threshold=4.376e+02, percent-clipped=2.0
2023-03-26 00:18:09,360 INFO [finetune.py:976] (6/7) Epoch 2, batch 3600, loss[loss=0.3167, simple_loss=0.352, pruned_loss=0.1407, over 4839.00 frames. ], tot_loss[loss=0.2675, simple_loss=0.3114, pruned_loss=0.1118, over 957478.92 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:18:22,992 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9342.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:18:45,802 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9369.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:18:51,739 INFO [finetune.py:976] (6/7) Epoch 2, batch 3650, loss[loss=0.2634, simple_loss=0.3031, pruned_loss=0.1119, over 4721.00 frames. ], tot_loss[loss=0.2703, simple_loss=0.3144, pruned_loss=0.1131, over 955901.29 frames. ], batch size: 23, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:19:24,510 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9414.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:19:24,987 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.270e+02 1.902e+02 2.269e+02 2.850e+02 5.426e+02, threshold=4.539e+02, percent-clipped=4.0
2023-03-26 00:19:31,907 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9416.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:19:45,284 INFO [finetune.py:976] (6/7) Epoch 2, batch 3700, loss[loss=0.2537, simple_loss=0.2821, pruned_loss=0.1126, over 3908.00 frames. ], tot_loss[loss=0.2742, simple_loss=0.3195, pruned_loss=0.1144, over 955799.98 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:20:15,847 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9464.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:20:23,947 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9475.0, num_to_drop=0, layers_to_drop=set()
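A consistency check worth knowing when reading these entries: every loss[...]/tot_loss[...] triple in this stretch of the log satisfies loss = 0.5 * simple_loss + pruned_loss to the printed precision, i.e. the reported loss is a weighted sum of the simple (linear-joiner) and pruned transducer losses. The 0.5 weight is inferred by fitting the logged numbers, not read out of finetune.py:

```python
def combined_transducer_loss(simple_loss: float, pruned_loss: float,
                             simple_loss_scale: float = 0.5) -> float:
    # Weight inferred from the logged triples, e.g. for batch 3600 above:
    # 0.5 * 0.3114 + 0.1118 = 0.2675, matching the logged tot_loss.
    return simple_loss_scale * simple_loss + pruned_loss

assert abs(combined_transducer_loss(0.3114, 0.1118) - 0.2675) < 5e-4
```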
2023-03-26 00:20:26,041 INFO [finetune.py:976] (6/7) Epoch 2, batch 3750, loss[loss=0.2464, simple_loss=0.276, pruned_loss=0.1084, over 4014.00 frames. ], tot_loss[loss=0.2748, simple_loss=0.3206, pruned_loss=0.1145, over 956535.19 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:20:55,390 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.357e+02 1.893e+02 2.405e+02 2.686e+02 6.929e+02, threshold=4.810e+02, percent-clipped=1.0
2023-03-26 00:21:10,673 INFO [finetune.py:976] (6/7) Epoch 2, batch 3800, loss[loss=0.2892, simple_loss=0.3099, pruned_loss=0.1343, over 4030.00 frames. ], tot_loss[loss=0.2793, simple_loss=0.3244, pruned_loss=0.1171, over 954029.73 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:22:03,070 INFO [finetune.py:976] (6/7) Epoch 2, batch 3850, loss[loss=0.2515, simple_loss=0.3011, pruned_loss=0.1009, over 4722.00 frames. ], tot_loss[loss=0.2762, simple_loss=0.3216, pruned_loss=0.1154, over 954446.48 frames. ], batch size: 54, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:22:07,231 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9583.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:22:07,844 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9584.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:22:21,969 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0290, 1.7364, 2.3501, 1.6967, 2.1093, 2.2509, 1.8051, 2.5002], device='cuda:6'), covar=tensor([0.1890, 0.2431, 0.1582, 0.2487, 0.1242, 0.1900, 0.2853, 0.1280], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0207, 0.0206, 0.0197, 0.0179, 0.0226, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:22:33,609 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.288e+02 1.890e+02 2.331e+02 2.867e+02 5.576e+02, threshold=4.662e+02, percent-clipped=3.0
2023-03-26 00:22:48,507 INFO [finetune.py:976] (6/7) Epoch 2, batch 3900, loss[loss=0.2472, simple_loss=0.2994, pruned_loss=0.09749, over 4790.00 frames. ], tot_loss[loss=0.2708, simple_loss=0.3164, pruned_loss=0.1126, over 956493.25 frames. ], batch size: 29, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:22:55,985 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9631.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:23:08,245 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9642.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:23:10,124 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9645.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:23:28,465 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9658.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:23:40,492 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=9669.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:23:46,465 INFO [finetune.py:976] (6/7) Epoch 2, batch 3950, loss[loss=0.2302, simple_loss=0.2858, pruned_loss=0.08728, over 4778.00 frames. ], tot_loss[loss=0.2663, simple_loss=0.3119, pruned_loss=0.1103, over 955743.09 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 16.0
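grad_scale drops from 32.0 to 16.0 between batches 3750 and 3800 above: the signature of dynamic fp16 loss scaling halving its scale after a step with inf/nan gradients. A self-contained sketch of that mechanism with torch.cuda.amp; the linear model, data, and optimizer are stand-ins, not the recipe's:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

model = torch.nn.Linear(80, 500).cuda()            # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=4e-3)
scaler = GradScaler(init_scale=32.0)               # matches grad_scale in the log

for _ in range(10):
    x = torch.randn(8, 80, device="cuda")
    opt.zero_grad()
    with autocast():                               # fp16 forward pass
        loss = model(x).square().mean()
    scaler.scale(loss).backward()                  # scaled backward pass
    scaler.step(opt)     # step is skipped when inf/nan grads are detected
    scaler.update()      # on overflow the scale is halved, e.g. 32.0 -> 16.0
    print(scaler.get_scale())
```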
2023-03-26 00:23:48,123 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.01 vs. limit=2.0
2023-03-26 00:24:00,117 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9690.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:24:28,618 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.967e+02 2.302e+02 2.784e+02 7.100e+02, threshold=4.604e+02, percent-clipped=2.0
2023-03-26 00:24:29,311 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=9717.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:24:30,647 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9719.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:24:36,439 INFO [finetune.py:976] (6/7) Epoch 2, batch 4000, loss[loss=0.2492, simple_loss=0.3002, pruned_loss=0.09911, over 4929.00 frames. ], tot_loss[loss=0.2665, simple_loss=0.3119, pruned_loss=0.1105, over 954583.36 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:24:38,954 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0
2023-03-26 00:24:48,466 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=9743.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:25:10,944 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9770.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:25:16,247 INFO [finetune.py:976] (6/7) Epoch 2, batch 4050, loss[loss=0.2539, simple_loss=0.289, pruned_loss=0.1094, over 4262.00 frames. ], tot_loss[loss=0.2702, simple_loss=0.3157, pruned_loss=0.1123, over 953656.05 frames. ], batch size: 18, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:25:24,685 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5718, 1.3870, 1.3668, 1.5650, 1.9417, 1.5108, 0.8843, 1.2848], device='cuda:6'), covar=tensor([0.2939, 0.3046, 0.2393, 0.2219, 0.2242, 0.1526, 0.3814, 0.2243], device='cuda:6'), in_proj_covar=tensor([0.0222, 0.0204, 0.0190, 0.0176, 0.0226, 0.0169, 0.0207, 0.0179], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:25:37,138 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=9804.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:25:44,426 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.258e+02 1.934e+02 2.184e+02 2.585e+02 5.351e+02, threshold=4.368e+02, percent-clipped=2.0
2023-03-26 00:25:57,342 INFO [finetune.py:976] (6/7) Epoch 2, batch 4100, loss[loss=0.2877, simple_loss=0.3361, pruned_loss=0.1197, over 4887.00 frames. ], tot_loss[loss=0.2714, simple_loss=0.317, pruned_loss=0.1129, over 952951.43 frames. ], batch size: 43, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:26:08,961 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9139, 3.3751, 3.5332, 3.7832, 3.6243, 3.3829, 3.9719, 1.2444], device='cuda:6'), covar=tensor([0.0786, 0.0769, 0.0841, 0.0893, 0.1355, 0.1436, 0.0778, 0.5132], device='cuda:6'), in_proj_covar=tensor([0.0370, 0.0246, 0.0275, 0.0297, 0.0344, 0.0288, 0.0314, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:26:09,411 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-26 00:26:34,086 INFO [finetune.py:976] (6/7) Epoch 2, batch 4150, loss[loss=0.2498, simple_loss=0.3047, pruned_loss=0.09741, over 4793.00 frames. ], tot_loss[loss=0.274, simple_loss=0.3184, pruned_loss=0.1148, over 952735.65 frames. ], batch size: 29, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:26:51,044 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6534, 1.8132, 1.7247, 1.0434, 2.0545, 1.9068, 1.7474, 1.6166], device='cuda:6'), covar=tensor([0.0732, 0.0613, 0.0765, 0.1033, 0.0485, 0.0740, 0.0716, 0.1123], device='cuda:6'), in_proj_covar=tensor([0.0138, 0.0131, 0.0143, 0.0129, 0.0108, 0.0140, 0.0146, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:27:03,954 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4552, 1.4835, 1.6252, 0.8950, 1.4610, 1.7188, 1.7083, 1.4896], device='cuda:6'), covar=tensor([0.1013, 0.0678, 0.0477, 0.0704, 0.0423, 0.0587, 0.0353, 0.0584], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0154, 0.0116, 0.0132, 0.0130, 0.0117, 0.0143, 0.0143], device='cuda:6'), out_proj_covar=tensor([9.5909e-05, 1.1453e-04, 8.4989e-05, 9.7049e-05, 9.4014e-05, 8.6014e-05, 1.0713e-04, 1.0583e-04], device='cuda:6')
2023-03-26 00:27:05,053 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.789e+02 2.151e+02 2.605e+02 5.306e+02, threshold=4.302e+02, percent-clipped=2.0
2023-03-26 00:27:17,338 INFO [finetune.py:976] (6/7) Epoch 2, batch 4200, loss[loss=0.2392, simple_loss=0.2971, pruned_loss=0.09059, over 4918.00 frames. ], tot_loss[loss=0.2736, simple_loss=0.3191, pruned_loss=0.1141, over 954447.33 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:27:25,691 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=9940.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:28:05,868 INFO [finetune.py:976] (6/7) Epoch 2, batch 4250, loss[loss=0.2513, simple_loss=0.2937, pruned_loss=0.1045, over 4824.00 frames. ], tot_loss[loss=0.2711, simple_loss=0.3165, pruned_loss=0.1128, over 955004.01 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:28:41,984 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10014.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:28:47,605 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.798e+02 2.175e+02 2.768e+02 4.624e+02, threshold=4.351e+02, percent-clipped=3.0
2023-03-26 00:29:00,124 INFO [finetune.py:976] (6/7) Epoch 2, batch 4300, loss[loss=0.2701, simple_loss=0.3104, pruned_loss=0.1149, over 4811.00 frames. ], tot_loss[loss=0.268, simple_loss=0.3133, pruned_loss=0.1113, over 955013.95 frames. ], batch size: 45, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:29:12,680 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10039.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:29:36,709 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10062.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 00:29:46,277 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10070.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:29:52,924 INFO [finetune.py:976] (6/7) Epoch 2, batch 4350, loss[loss=0.215, simple_loss=0.253, pruned_loss=0.08852, over 4285.00 frames. ], tot_loss[loss=0.264, simple_loss=0.3093, pruned_loss=0.1093, over 955958.62 frames. ], batch size: 18, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:29:58,432 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4553, 1.2041, 1.8889, 2.7029, 1.7812, 2.2284, 1.2375, 2.3086], device='cuda:6'), covar=tensor([0.2042, 0.2227, 0.1416, 0.0910, 0.1121, 0.1448, 0.1876, 0.0761], device='cuda:6'), in_proj_covar=tensor([0.0105, 0.0121, 0.0140, 0.0166, 0.0106, 0.0148, 0.0132, 0.0109], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 00:30:06,664 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10099.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:30:07,344 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10100.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 00:30:25,355 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.331e+02 1.930e+02 2.275e+02 2.717e+02 4.483e+02, threshold=4.550e+02, percent-clipped=1.0
2023-03-26 00:30:27,187 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10118.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:30:34,433 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10123.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 00:30:37,372 INFO [finetune.py:976] (6/7) Epoch 2, batch 4400, loss[loss=0.3167, simple_loss=0.3595, pruned_loss=0.1369, over 4916.00 frames. ], tot_loss[loss=0.2673, simple_loss=0.3121, pruned_loss=0.1112, over 955408.25 frames. ], batch size: 36, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:31:35,879 INFO [finetune.py:976] (6/7) Epoch 2, batch 4450, loss[loss=0.2947, simple_loss=0.3466, pruned_loss=0.1214, over 4845.00 frames. ], tot_loss[loss=0.2716, simple_loss=0.3171, pruned_loss=0.113, over 955181.66 frames. ], batch size: 44, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:32:17,824 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.27 vs. limit=5.0
2023-03-26 00:32:18,678 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.270e+02 1.960e+02 2.424e+02 3.015e+02 7.276e+02, threshold=4.848e+02, percent-clipped=5.0
2023-03-26 00:32:36,845 INFO [finetune.py:976] (6/7) Epoch 2, batch 4500, loss[loss=0.2662, simple_loss=0.3169, pruned_loss=0.1078, over 4900.00 frames. ], tot_loss[loss=0.2735, simple_loss=0.3189, pruned_loss=0.1141, over 954727.99 frames. ], batch size: 37, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:32:44,223 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10240.0, num_to_drop=0, layers_to_drop=set()
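The zipformer.py:1188 lines above track, per encoder stack, a warmup window (warmup_begin/warmup_end in batches) and which layers are stochastically skipped on this batch; long after warmup it is mostly num_to_drop=0, with occasional single drops such as layers_to_drop={0} or {2}. A toy version of that kind of layer-dropout schedule; the constants and the linear decay are guesses, not zipformer.py's actual rule:

```python
import random

def pick_layers_to_drop(batch_count: float, warmup_begin: float,
                        warmup_end: float, num_layers: int,
                        initial_p: float = 0.5, final_p: float = 0.05) -> set:
    # Per-layer skip probability decays linearly from initial_p to final_p
    # across [warmup_begin, warmup_end], then stays at final_p, so a mature
    # model still drops a layer now and then (num_to_drop=1 in the log).
    t = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
    t = min(1.0, max(0.0, t))
    p = initial_p + (final_p - initial_p) * t
    return {i for i in range(num_layers) if random.random() < p}

random.seed(0)
# Long past its warmup window, a stack only rarely drops a layer:
print([pick_layers_to_drop(10062.0, 2666.7, 3333.3, num_layers=3)
       for _ in range(5)])  # mostly set(), occasionally {0}, {1} or {2}
```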
2023-03-26 00:33:20,855 INFO [finetune.py:976] (6/7) Epoch 2, batch 4550, loss[loss=0.2488, simple_loss=0.3092, pruned_loss=0.09422, over 4931.00 frames. ], tot_loss[loss=0.2725, simple_loss=0.3187, pruned_loss=0.1131, over 955673.56 frames. ], batch size: 42, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:33:32,452 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10288.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:33:52,582 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2882, 2.9196, 3.0163, 3.2428, 3.0487, 2.9394, 3.3685, 1.0718], device='cuda:6'), covar=tensor([0.1028, 0.0914, 0.1066, 0.0999, 0.1479, 0.1423, 0.1021, 0.4630], device='cuda:6'), in_proj_covar=tensor([0.0372, 0.0246, 0.0277, 0.0298, 0.0343, 0.0288, 0.0313, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:33:55,787 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
2023-03-26 00:33:59,124 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10314.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:34:00,221 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.209e+02 1.803e+02 2.127e+02 2.576e+02 5.771e+02, threshold=4.255e+02, percent-clipped=1.0
2023-03-26 00:34:01,347 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.13 vs. limit=5.0
2023-03-26 00:34:14,432 INFO [finetune.py:976] (6/7) Epoch 2, batch 4600, loss[loss=0.2803, simple_loss=0.3302, pruned_loss=0.1152, over 4796.00 frames. ], tot_loss[loss=0.2708, simple_loss=0.3174, pruned_loss=0.1121, over 955764.00 frames. ], batch size: 51, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:34:50,214 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10362.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:34:54,861 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8388, 4.6022, 4.4713, 2.5143, 4.7730, 3.5696, 0.9540, 3.2449], device='cuda:6'), covar=tensor([0.2483, 0.1508, 0.1348, 0.3097, 0.0758, 0.0885, 0.4798, 0.1505], device='cuda:6'), in_proj_covar=tensor([0.0157, 0.0168, 0.0167, 0.0129, 0.0157, 0.0121, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 00:35:01,344 INFO [finetune.py:976] (6/7) Epoch 2, batch 4650, loss[loss=0.2295, simple_loss=0.2859, pruned_loss=0.08661, over 4759.00 frames. ], tot_loss[loss=0.2681, simple_loss=0.3144, pruned_loss=0.1109, over 957235.09 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:35:12,312 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10395.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 00:35:14,763 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10399.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:35:21,410 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10409.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:35:25,533 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.369e+02 1.834e+02 2.117e+02 2.478e+02 4.313e+02, threshold=4.233e+02, percent-clipped=1.0
2023-03-26 00:35:27,258 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10418.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 00:35:34,604 INFO [finetune.py:976] (6/7) Epoch 2, batch 4700, loss[loss=0.2505, simple_loss=0.2879, pruned_loss=0.1065, over 4892.00 frames. ], tot_loss[loss=0.2654, simple_loss=0.3107, pruned_loss=0.11, over 957623.18 frames. ], batch size: 32, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:35:38,903 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4861, 0.4673, 1.3712, 1.1796, 1.2078, 1.1307, 0.9952, 1.2966], device='cuda:6'), covar=tensor([1.2828, 2.2803, 1.7868, 2.0717, 2.0684, 1.4755, 2.5047, 1.5303], device='cuda:6'), in_proj_covar=tensor([0.0226, 0.0257, 0.0248, 0.0270, 0.0246, 0.0219, 0.0279, 0.0217], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 00:35:46,700 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10447.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:35:50,996 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10454.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:35:52,807 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6263, 1.6198, 1.7951, 1.9243, 1.7148, 3.6747, 1.4333, 1.7745], device='cuda:6'), covar=tensor([0.1111, 0.1675, 0.1169, 0.1098, 0.1655, 0.0302, 0.1487, 0.1778], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0080, 0.0077, 0.0079, 0.0092, 0.0082, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 00:36:01,573 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10470.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:36:07,254 INFO [finetune.py:976] (6/7) Epoch 2, batch 4750, loss[loss=0.3, simple_loss=0.3428, pruned_loss=0.1286, over 4850.00 frames. ], tot_loss[loss=0.2622, simple_loss=0.3074, pruned_loss=0.1085, over 955971.24 frames. ], batch size: 44, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:36:10,731 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6893, 1.6550, 2.2145, 3.3702, 2.3971, 2.3087, 0.8499, 2.6427], device='cuda:6'), covar=tensor([0.1898, 0.1609, 0.1333, 0.0593, 0.0873, 0.1670, 0.2280, 0.0697], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0121, 0.0140, 0.0166, 0.0107, 0.0148, 0.0132, 0.0109], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 00:36:11,775 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0
2023-03-26 00:36:35,687 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1948, 1.2683, 1.3681, 0.6338, 1.1962, 1.5575, 1.5862, 1.3458], device='cuda:6'), covar=tensor([0.0920, 0.0558, 0.0480, 0.0612, 0.0412, 0.0459, 0.0327, 0.0620], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0154, 0.0116, 0.0133, 0.0130, 0.0117, 0.0144, 0.0143], device='cuda:6'), out_proj_covar=tensor([9.6163e-05, 1.1445e-04, 8.5019e-05, 9.7706e-05, 9.4063e-05, 8.6044e-05, 1.0736e-04, 1.0580e-04], device='cuda:6')
2023-03-26 00:36:41,891 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10515.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 00:36:42,339 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.257e+02 1.733e+02 2.215e+02 2.656e+02 7.843e+02, threshold=4.429e+02, percent-clipped=2.0
2023-03-26 00:36:51,174 INFO [finetune.py:976] (6/7) Epoch 2, batch 4800, loss[loss=0.2777, simple_loss=0.3134, pruned_loss=0.121, over 4760.00 frames. ], tot_loss[loss=0.263, simple_loss=0.3092, pruned_loss=0.1084, over 955541.01 frames. ], batch size: 27, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:37:33,914 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7352, 0.9603, 1.3967, 1.3734, 1.2150, 1.2419, 1.3102, 1.4323], device='cuda:6'), covar=tensor([1.4763, 2.7717, 1.9835, 2.2095, 2.4906, 1.6717, 2.9286, 1.7615], device='cuda:6'), in_proj_covar=tensor([0.0225, 0.0255, 0.0247, 0.0268, 0.0244, 0.0218, 0.0278, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 00:37:43,276 INFO [finetune.py:976] (6/7) Epoch 2, batch 4850, loss[loss=0.3065, simple_loss=0.3529, pruned_loss=0.13, over 4816.00 frames. ], tot_loss[loss=0.2669, simple_loss=0.3139, pruned_loss=0.1099, over 955245.16 frames. ], batch size: 39, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:38:12,918 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0
2023-03-26 00:38:22,411 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.244e+02 1.945e+02 2.280e+02 2.839e+02 5.226e+02, threshold=4.560e+02, percent-clipped=1.0
2023-03-26 00:38:33,147 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.43 vs. limit=5.0
2023-03-26 00:38:33,262 INFO [finetune.py:976] (6/7) Epoch 2, batch 4900, loss[loss=0.3045, simple_loss=0.3408, pruned_loss=0.1341, over 4808.00 frames. ], tot_loss[loss=0.2683, simple_loss=0.3155, pruned_loss=0.1106, over 954022.16 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:39:15,444 INFO [finetune.py:976] (6/7) Epoch 2, batch 4950, loss[loss=0.3433, simple_loss=0.3517, pruned_loss=0.1674, over 4231.00 frames. ], tot_loss[loss=0.2701, simple_loss=0.3169, pruned_loss=0.1116, over 953360.23 frames. ], batch size: 65, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:39:27,278 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10695.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:39:40,467 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.382e+02 1.832e+02 2.144e+02 2.563e+02 4.788e+02, threshold=4.289e+02, percent-clipped=1.0
2023-03-26 00:39:42,245 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=10718.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 00:39:48,191 INFO [finetune.py:976] (6/7) Epoch 2, batch 5000, loss[loss=0.2307, simple_loss=0.2788, pruned_loss=0.0913, over 4825.00 frames. ], tot_loss[loss=0.2678, simple_loss=0.3143, pruned_loss=0.1107, over 954205.81 frames. ], batch size: 38, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:39:59,861 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10743.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:39:59,986 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-26 00:40:00,514 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10744.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:40:00,541 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4905, 1.2617, 1.2502, 1.1751, 1.6234, 1.6452, 1.4194, 1.1824], device='cuda:6'), covar=tensor([0.0229, 0.0281, 0.0511, 0.0324, 0.0211, 0.0322, 0.0241, 0.0338], device='cuda:6'), in_proj_covar=tensor([0.0080, 0.0111, 0.0132, 0.0111, 0.0101, 0.0097, 0.0087, 0.0106], device='cuda:6'), out_proj_covar=tensor([6.2928e-05, 8.7768e-05, 1.0640e-04, 8.8257e-05, 8.0411e-05, 7.2115e-05, 6.6735e-05, 8.2953e-05], device='cuda:6')
2023-03-26 00:40:01,771 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4378, 1.4682, 1.5331, 0.8880, 1.4772, 1.8248, 1.7631, 1.4845], device='cuda:6'), covar=tensor([0.0931, 0.0606, 0.0530, 0.0686, 0.0443, 0.0408, 0.0314, 0.0535], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0154, 0.0116, 0.0134, 0.0130, 0.0117, 0.0144, 0.0142], device='cuda:6'), out_proj_covar=tensor([9.6425e-05, 1.1481e-04, 8.5282e-05, 9.8304e-05, 9.4236e-05, 8.6325e-05, 1.0790e-04, 1.0559e-04], device='cuda:6')
2023-03-26 00:40:13,892 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10765.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:40:14,501 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=10766.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 00:40:19,857 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10774.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:40:25,991 INFO [finetune.py:976] (6/7) Epoch 2, batch 5050, loss[loss=0.2077, simple_loss=0.2705, pruned_loss=0.07248, over 4859.00 frames. ], tot_loss[loss=0.2631, simple_loss=0.3097, pruned_loss=0.1082, over 952809.44 frames. ], batch size: 34, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:40:48,608 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10805.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:40:52,130 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=10810.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 00:40:54,995 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0
2023-03-26 00:40:55,640 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.711e+02 2.025e+02 2.497e+02 4.623e+02, threshold=4.049e+02, percent-clipped=1.0
2023-03-26 00:41:03,401 INFO [finetune.py:976] (6/7) Epoch 2, batch 5100, loss[loss=0.2106, simple_loss=0.2658, pruned_loss=0.07771, over 4829.00 frames. ], tot_loss[loss=0.2584, simple_loss=0.3054, pruned_loss=0.1057, over 955246.64 frames. ], batch size: 33, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:41:07,764 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10835.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:41:37,831 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.67 vs. limit=5.0
2023-03-26 00:41:47,381 INFO [finetune.py:976] (6/7) Epoch 2, batch 5150, loss[loss=0.2653, simple_loss=0.3158, pruned_loss=0.1074, over 4890.00 frames. ], tot_loss[loss=0.2579, simple_loss=0.3045, pruned_loss=0.1056, over 955142.25 frames. ], batch size: 35, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:41:54,273 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=10885.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 00:41:54,362 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-26 00:42:25,069 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.747e+02 2.089e+02 2.521e+02 5.475e+02, threshold=4.178e+02, percent-clipped=1.0
2023-03-26 00:42:37,757 INFO [finetune.py:976] (6/7) Epoch 2, batch 5200, loss[loss=0.2833, simple_loss=0.3244, pruned_loss=0.1211, over 4823.00 frames. ], tot_loss[loss=0.263, simple_loss=0.3099, pruned_loss=0.108, over 953620.78 frames. ], batch size: 30, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:42:49,070 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9300, 1.8872, 1.2678, 2.1140, 2.1192, 1.4814, 2.6110, 1.8391], device='cuda:6'), covar=tensor([0.2772, 0.5767, 0.6298, 0.5610, 0.3903, 0.2892, 0.4942, 0.3910], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0197, 0.0239, 0.0253, 0.0217, 0.0183, 0.0206, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:43:00,786 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=10946.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 00:43:06,228 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8225, 1.6814, 1.6398, 1.9328, 2.4453, 1.8657, 1.6219, 1.3876], device='cuda:6'), covar=tensor([0.2886, 0.2911, 0.2314, 0.2052, 0.2502, 0.1464, 0.3232, 0.2246], device='cuda:6'), in_proj_covar=tensor([0.0225, 0.0208, 0.0194, 0.0179, 0.0230, 0.0171, 0.0210, 0.0183], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:43:29,722 INFO [finetune.py:976] (6/7) Epoch 2, batch 5250, loss[loss=0.3216, simple_loss=0.3614, pruned_loss=0.1409, over 4809.00 frames. ], tot_loss[loss=0.2662, simple_loss=0.3133, pruned_loss=0.1095, over 954313.27 frames. ], batch size: 40, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:44:03,163 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.206e+02 2.014e+02 2.376e+02 2.957e+02 8.531e+02, threshold=4.753e+02, percent-clipped=3.0
2023-03-26 00:44:10,958 INFO [finetune.py:976] (6/7) Epoch 2, batch 5300, loss[loss=0.2714, simple_loss=0.3319, pruned_loss=0.1054, over 4925.00 frames. ], tot_loss[loss=0.2696, simple_loss=0.3161, pruned_loss=0.1115, over 953269.98 frames. ], batch size: 42, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:44:44,458 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11065.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:44:58,488 INFO [finetune.py:976] (6/7) Epoch 2, batch 5350, loss[loss=0.1951, simple_loss=0.2473, pruned_loss=0.07148, over 4758.00 frames. ], tot_loss[loss=0.2685, simple_loss=0.3163, pruned_loss=0.1104, over 955945.42 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:45:16,013 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11100.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:45:23,557 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11110.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 00:45:25,461 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11113.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:45:27,190 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.217e+02 1.830e+02 2.178e+02 2.684e+02 7.389e+02, threshold=4.357e+02, percent-clipped=1.0
2023-03-26 00:45:39,914 INFO [finetune.py:976] (6/7) Epoch 2, batch 5400, loss[loss=0.2987, simple_loss=0.3327, pruned_loss=0.1323, over 4248.00 frames. ], tot_loss[loss=0.2662, simple_loss=0.3135, pruned_loss=0.1095, over 955132.32 frames. ], batch size: 65, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:45:41,702 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11130.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:45:53,137 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1225, 0.7966, 0.9360, 1.1119, 1.1861, 1.2097, 1.0458, 0.9911], device='cuda:6'), covar=tensor([0.0289, 0.0399, 0.0587, 0.0310, 0.0308, 0.0380, 0.0275, 0.0381], device='cuda:6'), in_proj_covar=tensor([0.0081, 0.0113, 0.0134, 0.0113, 0.0103, 0.0098, 0.0088, 0.0107], device='cuda:6'), out_proj_covar=tensor([6.3704e-05, 8.9031e-05, 1.0839e-04, 8.9664e-05, 8.1797e-05, 7.3298e-05, 6.7915e-05, 8.4234e-05], device='cuda:6')
2023-03-26 00:46:05,249 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11158.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:46:22,023 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11174.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:46:24,384 INFO [finetune.py:976] (6/7) Epoch 2, batch 5450, loss[loss=0.24, simple_loss=0.2823, pruned_loss=0.09885, over 4903.00 frames. ], tot_loss[loss=0.261, simple_loss=0.3083, pruned_loss=0.1068, over 954770.67 frames. ], batch size: 32, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:46:31,280 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 00:46:52,417 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7306, 1.1729, 1.6375, 1.4395, 1.4227, 1.4378, 1.3307, 1.5333], device='cuda:6'), covar=tensor([1.0191, 1.7583, 1.3615, 1.5471, 1.7140, 1.1741, 2.0588, 1.1734], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0259, 0.0252, 0.0271, 0.0247, 0.0220, 0.0281, 0.0219], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 00:46:55,234 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.192e+02 1.719e+02 1.995e+02 2.396e+02 4.116e+02, threshold=3.991e+02, percent-clipped=0.0
2023-03-26 00:47:11,561 INFO [finetune.py:976] (6/7) Epoch 2, batch 5500, loss[loss=0.2044, simple_loss=0.2634, pruned_loss=0.07267, over 4870.00 frames. ], tot_loss[loss=0.2564, simple_loss=0.3042, pruned_loss=0.1043, over 954779.97 frames. ], batch size: 31, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:47:21,362 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11235.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:47:30,516 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11241.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 00:48:00,494 INFO [finetune.py:976] (6/7) Epoch 2, batch 5550, loss[loss=0.3009, simple_loss=0.3453, pruned_loss=0.1282, over 4729.00 frames. ], tot_loss[loss=0.2573, simple_loss=0.3054, pruned_loss=0.1046, over 954935.85 frames. ], batch size: 59, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:48:12,158 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6537, 1.4100, 1.3596, 1.4409, 1.7828, 1.8289, 1.5002, 1.2271], device='cuda:6'), covar=tensor([0.0240, 0.0342, 0.0512, 0.0320, 0.0206, 0.0294, 0.0256, 0.0383], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0112, 0.0134, 0.0113, 0.0103, 0.0098, 0.0088, 0.0107], device='cuda:6'), out_proj_covar=tensor([6.3824e-05, 8.8903e-05, 1.0810e-04, 8.9726e-05, 8.1758e-05, 7.2918e-05, 6.7716e-05, 8.4157e-05], device='cuda:6')
2023-03-26 00:48:20,833 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5981, 1.7835, 1.6094, 1.1446, 1.8981, 1.8127, 1.6478, 1.5569], device='cuda:6'), covar=tensor([0.0751, 0.0615, 0.0829, 0.1060, 0.0516, 0.0772, 0.0794, 0.1152], device='cuda:6'), in_proj_covar=tensor([0.0140, 0.0133, 0.0144, 0.0130, 0.0109, 0.0141, 0.0148, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 00:48:30,781 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.394e+02 1.988e+02 2.263e+02 2.636e+02 5.646e+02, threshold=4.525e+02, percent-clipped=4.0
2023-03-26 00:48:38,278 INFO [finetune.py:976] (6/7) Epoch 2, batch 5600, loss[loss=0.2488, simple_loss=0.2951, pruned_loss=0.1013, over 4832.00 frames. ], tot_loss[loss=0.259, simple_loss=0.308, pruned_loss=0.105, over 954017.62 frames. ], batch size: 30, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:49:19,620 INFO [finetune.py:976] (6/7) Epoch 2, batch 5650, loss[loss=0.2298, simple_loss=0.269, pruned_loss=0.09529, over 4021.00 frames. ], tot_loss[loss=0.2618, simple_loss=0.3111, pruned_loss=0.1062, over 954318.03 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:49:44,845 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11400.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:49:54,702 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.185e+02 1.700e+02 2.043e+02 2.333e+02 3.537e+02, threshold=4.085e+02, percent-clipped=0.0
2023-03-26 00:50:01,884 INFO [finetune.py:976] (6/7) Epoch 2, batch 5700, loss[loss=0.2432, simple_loss=0.2672, pruned_loss=0.1096, over 3958.00 frames. ], tot_loss[loss=0.2588, simple_loss=0.3068, pruned_loss=0.1054, over 937811.47 frames. ], batch size: 17, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:50:03,182 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11430.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:50:13,814 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11448.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:50:33,841 INFO [finetune.py:976] (6/7) Epoch 3, batch 0, loss[loss=0.2749, simple_loss=0.3266, pruned_loss=0.1116, over 4753.00 frames. ], tot_loss[loss=0.2749, simple_loss=0.3266, pruned_loss=0.1116, over 4753.00 frames. ], batch size: 26, lr: 3.99e-03, grad_scale: 16.0
2023-03-26 00:50:33,841 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 00:50:55,328 INFO [finetune.py:1010] (6/7) Epoch 3, validation: loss=0.1864, simple_loss=0.2566, pruned_loss=0.05807, over 2265189.00 frames.
2023-03-26 00:50:55,329 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB
2023-03-26 00:51:20,330 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11478.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:51:29,400 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3752, 1.4413, 1.3839, 1.7682, 1.4963, 3.1397, 1.2459, 1.5214], device='cuda:6'), covar=tensor([0.1134, 0.1801, 0.1382, 0.1069, 0.1758, 0.0285, 0.1652, 0.1858], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0080, 0.0078, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 00:51:35,856 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0
2023-03-26 00:51:38,817 INFO [finetune.py:976] (6/7) Epoch 3, batch 50, loss[loss=0.2544, simple_loss=0.2806, pruned_loss=0.1141, over 4427.00 frames. ], tot_loss[loss=0.2652, simple_loss=0.3136, pruned_loss=0.1085, over 215300.95 frames. ], batch size: 19, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:51:45,831 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.161e+02 1.779e+02 2.075e+02 2.495e+02 4.593e+02, threshold=4.151e+02, percent-clipped=1.0
2023-03-26 00:51:54,921 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11530.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:51:56,216 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11532.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:52:02,012 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11541.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 00:52:12,049 INFO [finetune.py:976] (6/7) Epoch 3, batch 100, loss[loss=0.2214, simple_loss=0.2777, pruned_loss=0.08253, over 4850.00 frames. ], tot_loss[loss=0.2562, simple_loss=0.3037, pruned_loss=0.1043, over 379688.64 frames. ], batch size: 49, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:52:33,515 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11589.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 00:52:36,426 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11593.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 00:52:47,408 INFO [finetune.py:976] (6/7) Epoch 3, batch 150, loss[loss=0.2596, simple_loss=0.2976, pruned_loss=0.1108, over 4868.00 frames. ], tot_loss[loss=0.2513, simple_loss=0.298, pruned_loss=0.1023, over 508411.45 frames. ], batch size: 34, lr: 3.99e-03, grad_scale: 32.0
2023-03-26 00:53:00,294 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.308e+02 1.853e+02 2.230e+02 2.582e+02 4.758e+02, threshold=4.459e+02, percent-clipped=3.0
2023-03-26 00:53:24,128 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11635.0, num_to_drop=0, layers_to_drop=set()
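Two averaging conventions are visible in these entries: every validation line reports the same 2265189.00-frame total, i.e. a plain frame-weighted average over the entire dev set, while the training tot_loss frame counts hover near a roughly constant ~955k, which is what an exponentially decayed frame-weighted accumulator converges to (steady state roughly batch_frames / (1 - decay)). A sketch of both bookkeeping styles; the decay constant is a guess chosen only so the steady-state window lands near the logged totals:

```python
class FrameWeightedLoss:
    """Frame-weighted loss average. decay=1.0 gives the exact corpus average
    (the 'over 2265189.00 frames' validation lines); decay<1.0 gives a running
    training average whose effective window settles near
    batch_frames / (1 - decay), e.g. ~4800 / 0.005 = 960000 frames, close to
    the ~955k totals in the tot_loss lines (the 0.995 here is a guess)."""

    def __init__(self, decay: float = 1.0):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
        self.frames = self.frames * self.decay + batch_frames
        return self.loss_sum / self.frames  # the averaged loss that gets logged

valid_avg = FrameWeightedLoss(decay=1.0)    # full dev-set average
train_avg = FrameWeightedLoss(decay=0.995)  # decayed running average
```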
], batch size: 32, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:54:23,116 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7471, 3.9734, 3.8222, 2.0608, 4.0513, 2.9535, 0.9674, 2.8046], device='cuda:6'), covar=tensor([0.2452, 0.1752, 0.1558, 0.3380, 0.0968, 0.0942, 0.4782, 0.1424], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0167, 0.0165, 0.0128, 0.0156, 0.0120, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 00:54:34,964 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.21 vs. limit=5.0 2023-03-26 00:54:35,516 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11696.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:54:37,314 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11699.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:54:44,850 INFO [finetune.py:976] (6/7) Epoch 3, batch 250, loss[loss=0.3335, simple_loss=0.3678, pruned_loss=0.1496, over 4827.00 frames. ], tot_loss[loss=0.2551, simple_loss=0.3017, pruned_loss=0.1042, over 684721.53 frames. ], batch size: 30, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:55:02,874 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.255e+02 1.783e+02 2.124e+02 2.629e+02 6.988e+02, threshold=4.248e+02, percent-clipped=1.0 2023-03-26 00:55:32,486 INFO [finetune.py:976] (6/7) Epoch 3, batch 300, loss[loss=0.2303, simple_loss=0.3051, pruned_loss=0.07777, over 4865.00 frames. ], tot_loss[loss=0.2585, simple_loss=0.3062, pruned_loss=0.1054, over 745613.12 frames. ], batch size: 34, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:55:40,980 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:56:13,666 INFO [finetune.py:976] (6/7) Epoch 3, batch 350, loss[loss=0.2122, simple_loss=0.2746, pruned_loss=0.07489, over 4760.00 frames. ], tot_loss[loss=0.2595, simple_loss=0.3078, pruned_loss=0.1056, over 792289.32 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:56:20,325 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.891e+02 2.265e+02 2.576e+02 3.939e+02, threshold=4.529e+02, percent-clipped=0.0 2023-03-26 00:56:20,464 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11816.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:56:30,465 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=11830.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:56:45,041 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11846.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:56:51,649 INFO [finetune.py:976] (6/7) Epoch 3, batch 400, loss[loss=0.2417, simple_loss=0.3127, pruned_loss=0.08532, over 4781.00 frames. ], tot_loss[loss=0.259, simple_loss=0.3085, pruned_loss=0.1048, over 830828.67 frames. 
], batch size: 45, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:57:20,621 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11877.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:57:21,181 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=11878.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:57:33,133 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:57:54,683 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-26 00:57:55,602 INFO [finetune.py:976] (6/7) Epoch 3, batch 450, loss[loss=0.2234, simple_loss=0.2864, pruned_loss=0.08016, over 4771.00 frames. ], tot_loss[loss=0.2574, simple_loss=0.3071, pruned_loss=0.1038, over 856706.74 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:58:01,507 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11907.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:58:12,547 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.239e+02 1.814e+02 2.317e+02 2.793e+02 4.030e+02, threshold=4.633e+02, percent-clipped=0.0 2023-03-26 00:58:15,058 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11920.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:58:20,411 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-26 00:58:31,366 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.68 vs. limit=2.0 2023-03-26 00:58:39,234 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11948.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:58:50,439 INFO [finetune.py:976] (6/7) Epoch 3, batch 500, loss[loss=0.2412, simple_loss=0.2921, pruned_loss=0.09512, over 4818.00 frames. ], tot_loss[loss=0.2549, simple_loss=0.3041, pruned_loss=0.1029, over 876051.83 frames. ], batch size: 25, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:59:19,998 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=11981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:27,095 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=11984.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:31,810 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=11991.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:46,861 INFO [finetune.py:976] (6/7) Epoch 3, batch 550, loss[loss=0.2214, simple_loss=0.2838, pruned_loss=0.07948, over 4783.00 frames. ], tot_loss[loss=0.2523, simple_loss=0.301, pruned_loss=0.1018, over 894229.46 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 00:59:49,272 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12009.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 00:59:51,423 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 00:59:53,393 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.776e+02 2.066e+02 2.700e+02 4.009e+02, threshold=4.133e+02, percent-clipped=0.0 2023-03-26 01:00:15,550 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12045.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:00:19,752 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. 
limit=2.0 2023-03-26 01:00:19,972 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8457, 1.7597, 1.2411, 2.0208, 2.0178, 1.5056, 2.4802, 1.7834], device='cuda:6'), covar=tensor([0.2607, 0.5588, 0.5887, 0.5256, 0.3849, 0.2646, 0.4867, 0.3597], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0196, 0.0240, 0.0254, 0.0218, 0.0183, 0.0207, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 01:00:21,765 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:00:22,312 INFO [finetune.py:976] (6/7) Epoch 3, batch 600, loss[loss=0.2507, simple_loss=0.306, pruned_loss=0.09774, over 4935.00 frames. ], tot_loss[loss=0.2532, simple_loss=0.3019, pruned_loss=0.1023, over 908238.52 frames. ], batch size: 38, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 01:00:41,420 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12076.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:01:16,809 INFO [finetune.py:976] (6/7) Epoch 3, batch 650, loss[loss=0.2591, simple_loss=0.3108, pruned_loss=0.1037, over 4751.00 frames. ], tot_loss[loss=0.2563, simple_loss=0.3056, pruned_loss=0.1036, over 918694.07 frames. ], batch size: 28, lr: 3.99e-03, grad_scale: 32.0 2023-03-26 01:01:23,449 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.901e+02 2.250e+02 2.651e+02 5.885e+02, threshold=4.501e+02, percent-clipped=2.0 2023-03-26 01:01:36,066 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9915, 4.5801, 4.3611, 2.4201, 4.6841, 3.3859, 0.8980, 3.1603], device='cuda:6'), covar=tensor([0.2484, 0.1268, 0.1399, 0.2877, 0.0729, 0.0887, 0.4928, 0.1290], device='cuda:6'), in_proj_covar=tensor([0.0157, 0.0167, 0.0166, 0.0129, 0.0156, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 01:01:50,166 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12137.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:02:02,048 INFO [finetune.py:976] (6/7) Epoch 3, batch 700, loss[loss=0.2658, simple_loss=0.326, pruned_loss=0.1028, over 4818.00 frames. ], tot_loss[loss=0.2593, simple_loss=0.309, pruned_loss=0.1048, over 926719.90 frames. ], batch size: 39, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:02:07,361 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0 2023-03-26 01:02:12,358 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12172.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:02:24,515 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12188.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:02:39,280 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12202.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:02:43,053 INFO [finetune.py:976] (6/7) Epoch 3, batch 750, loss[loss=0.2509, simple_loss=0.3107, pruned_loss=0.0955, over 4848.00 frames. ], tot_loss[loss=0.2592, simple_loss=0.3098, pruned_loss=0.1043, over 934960.37 frames. 
], batch size: 49, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:02:54,945 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.858e+02 2.308e+02 2.738e+02 5.308e+02, threshold=4.616e+02, percent-clipped=1.0 2023-03-26 01:03:05,276 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8819, 4.3063, 4.1616, 2.3793, 4.3942, 3.1671, 0.5978, 2.9739], device='cuda:6'), covar=tensor([0.2680, 0.1217, 0.1346, 0.2926, 0.0827, 0.0980, 0.4974, 0.1361], device='cuda:6'), in_proj_covar=tensor([0.0158, 0.0169, 0.0167, 0.0130, 0.0157, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 01:03:14,108 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12236.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:03:30,124 INFO [finetune.py:976] (6/7) Epoch 3, batch 800, loss[loss=0.27, simple_loss=0.315, pruned_loss=0.1125, over 4810.00 frames. ], tot_loss[loss=0.258, simple_loss=0.3086, pruned_loss=0.1037, over 938826.65 frames. ], batch size: 40, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:03:32,095 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5768, 1.3909, 1.4548, 1.6241, 2.0269, 1.5576, 1.1283, 1.3359], device='cuda:6'), covar=tensor([0.2619, 0.2773, 0.2228, 0.2006, 0.2011, 0.1558, 0.3267, 0.2208], device='cuda:6'), in_proj_covar=tensor([0.0225, 0.0206, 0.0194, 0.0179, 0.0229, 0.0170, 0.0210, 0.0183], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 01:03:42,871 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12276.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:03,330 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12291.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:11,114 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12304.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:12,243 INFO [finetune.py:976] (6/7) Epoch 3, batch 850, loss[loss=0.2187, simple_loss=0.2696, pruned_loss=0.08388, over 4781.00 frames. ], tot_loss[loss=0.2564, simple_loss=0.3064, pruned_loss=0.1032, over 942721.94 frames. ], batch size: 29, lr: 3.98e-03, grad_scale: 16.0 2023-03-26 01:04:19,499 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.224e+02 1.828e+02 2.109e+02 2.576e+02 5.946e+02, threshold=4.217e+02, percent-clipped=1.0 2023-03-26 01:04:46,218 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12339.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:04:46,836 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12340.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:05:06,378 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 01:05:06,917 INFO [finetune.py:976] (6/7) Epoch 3, batch 900, loss[loss=0.2015, simple_loss=0.2622, pruned_loss=0.07038, over 4903.00 frames. ], tot_loss[loss=0.2526, simple_loss=0.3026, pruned_loss=0.1013, over 946919.81 frames. 
2023-03-26 01:05:09,334 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1155, 2.4755, 2.3202, 1.3214, 2.6928, 2.3589, 2.0626, 2.2829], device='cuda:6'), covar=tensor([0.0830, 0.1382, 0.2093, 0.3128, 0.1901, 0.1799, 0.2055, 0.1589], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0195, 0.0203, 0.0188, 0.0214, 0.0208, 0.0215, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:05:29,664 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4898, 1.7294, 1.8714, 1.8971, 1.8409, 4.1736, 1.6315, 1.8734], device='cuda:6'), covar=tensor([0.1082, 0.1642, 0.1173, 0.1025, 0.1473, 0.0269, 0.1317, 0.1630], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0092, 0.0083, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 01:05:43,766 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12403.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:05:45,598 INFO [finetune.py:976] (6/7) Epoch 3, batch 950, loss[loss=0.2547, simple_loss=0.3021, pruned_loss=0.1036, over 4856.00 frames. ], tot_loss[loss=0.2512, simple_loss=0.3013, pruned_loss=0.1005, over 950292.72 frames. ], batch size: 31, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:05:57,754 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.779e+02 2.123e+02 2.532e+02 4.452e+02, threshold=4.246e+02, percent-clipped=1.0
2023-03-26 01:06:12,423 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12432.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:06:19,163 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7064, 1.3856, 1.2203, 1.2267, 1.2904, 1.2849, 1.2650, 2.2214], device='cuda:6'), covar=tensor([1.5119, 1.3437, 1.1097, 1.4759, 1.1932, 0.7773, 1.3977, 0.4309], device='cuda:6'), in_proj_covar=tensor([0.0263, 0.0241, 0.0216, 0.0277, 0.0231, 0.0193, 0.0235, 0.0179], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:06:22,021 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12446.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:06:28,912 INFO [finetune.py:976] (6/7) Epoch 3, batch 1000, loss[loss=0.2579, simple_loss=0.3004, pruned_loss=0.1077, over 4784.00 frames. ], tot_loss[loss=0.2545, simple_loss=0.3038, pruned_loss=0.1026, over 951555.21 frames. ], batch size: 25, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:06:39,166 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12472.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:07:17,558 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12502.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:07:19,893 INFO [finetune.py:976] (6/7) Epoch 3, batch 1050, loss[loss=0.2683, simple_loss=0.3261, pruned_loss=0.1053, over 4907.00 frames. ], tot_loss[loss=0.2569, simple_loss=0.3062, pruned_loss=0.1038, over 951888.45 frames. ], batch size: 43, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:07:20,630 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12507.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:07:31,081 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.261e+02 2.008e+02 2.380e+02 2.733e+02 7.204e+02, threshold=4.759e+02, percent-clipped=3.0
2023-03-26 01:07:38,120 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12520.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:08:07,544 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6230, 1.5211, 1.6095, 1.6452, 1.0213, 3.2955, 1.2040, 1.6078], device='cuda:6'), covar=tensor([0.3322, 0.2394, 0.1955, 0.2303, 0.2077, 0.0192, 0.2782, 0.1557], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0110, 0.0114, 0.0118, 0.0115, 0.0096, 0.0099, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 01:08:10,399 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12550.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:08:17,959 INFO [finetune.py:976] (6/7) Epoch 3, batch 1100, loss[loss=0.2428, simple_loss=0.286, pruned_loss=0.09978, over 4752.00 frames. ], tot_loss[loss=0.2572, simple_loss=0.3072, pruned_loss=0.1036, over 952659.10 frames. ], batch size: 59, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:08:35,479 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12576.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:08:39,113 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0933, 1.3520, 0.9605, 1.2931, 1.4711, 2.5119, 1.1412, 1.4704], device='cuda:6'), covar=tensor([0.1077, 0.1839, 0.1302, 0.1040, 0.1727, 0.0418, 0.1627, 0.1829], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0080, 0.0093, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 01:08:53,950 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8859, 1.6764, 1.3373, 2.0158, 1.9514, 1.5053, 2.2931, 1.8572], device='cuda:6'), covar=tensor([0.2537, 0.5546, 0.5390, 0.4731, 0.3562, 0.2601, 0.5224, 0.3417], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0197, 0.0239, 0.0254, 0.0219, 0.0184, 0.0208, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:08:58,282 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12604.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:08:59,396 INFO [finetune.py:976] (6/7) Epoch 3, batch 1150, loss[loss=0.2426, simple_loss=0.3014, pruned_loss=0.09187, over 4908.00 frames. ], tot_loss[loss=0.2575, simple_loss=0.3083, pruned_loss=0.1034, over 955541.48 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 16.0
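The optim.py:369 entries summarize recent gradient norms (the five values read as min/25%/50%/75%/max) and the clipping threshold. Across the entries above, the threshold equals Clipping_scale times the median (e.g. 2.0 * 2.250e+02 = 4.501e+02; 2.0 * 2.308e+02 = 4.616e+02). A simplified stand-in for that behavior, not icefall's actual ScaledAdam code:

```python
# Sketch: keep a history of global grad norms, clip when the current
# norm exceeds clipping_scale * median. Clipped batches would be what
# the log counts as `percent-clipped`.
from collections import deque
import torch

class GradNormClipper:
    def __init__(self, clipping_scale: float = 2.0, history: int = 128):
        self.clipping_scale = clipping_scale
        self.norms = deque(maxlen=history)

    def clip_(self, parameters):
        grads = [p.grad for p in parameters if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
        self.norms.append(norm)
        median = sorted(self.norms)[len(self.norms) // 2]
        threshold = self.clipping_scale * median
        if norm > threshold:
            for g in grads:
                g.mul_(threshold / norm)
        return norm, threshold

lin = torch.nn.Linear(4, 4)
lin(torch.randn(2, 4)).sum().backward()
print(GradNormClipper().clip_(list(lin.parameters())))
```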
2023-03-26 01:09:11,543 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.712e+02 1.953e+02 2.432e+02 5.551e+02, threshold=3.906e+02, percent-clipped=1.0
2023-03-26 01:09:21,033 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12624.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:09:42,280 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12640.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:10:00,915 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12652.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:10:03,873 INFO [finetune.py:976] (6/7) Epoch 3, batch 1200, loss[loss=0.2584, simple_loss=0.3028, pruned_loss=0.107, over 4076.00 frames. ], tot_loss[loss=0.2567, simple_loss=0.307, pruned_loss=0.1032, over 952250.04 frames. ], batch size: 17, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:10:05,756 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7168, 1.1547, 1.5186, 1.4842, 1.3514, 1.3594, 1.3479, 1.4277], device='cuda:6'), covar=tensor([0.9942, 1.8160, 1.3574, 1.5244, 1.6845, 1.1790, 1.9780, 1.2181], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0257, 0.0253, 0.0269, 0.0245, 0.0219, 0.0280, 0.0219], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:10:31,626 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12688.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:10:34,132 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7218, 0.8963, 1.4324, 1.4006, 1.3013, 1.3341, 1.2366, 1.3668], device='cuda:6'), covar=tensor([0.9974, 1.8867, 1.4189, 1.5493, 1.6699, 1.2600, 2.0787, 1.2191], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0256, 0.0252, 0.0269, 0.0245, 0.0219, 0.0279, 0.0218], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:10:43,571 INFO [finetune.py:976] (6/7) Epoch 3, batch 1250, loss[loss=0.294, simple_loss=0.3356, pruned_loss=0.1262, over 4899.00 frames. ], tot_loss[loss=0.2553, simple_loss=0.3049, pruned_loss=0.1028, over 954950.96 frames. ], batch size: 43, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:10:51,325 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.791e+02 2.152e+02 2.550e+02 3.946e+02, threshold=4.304e+02, percent-clipped=1.0
2023-03-26 01:11:06,475 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=12732.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:11:25,782 INFO [finetune.py:976] (6/7) Epoch 3, batch 1300, loss[loss=0.1781, simple_loss=0.2403, pruned_loss=0.05797, over 4777.00 frames. ], tot_loss[loss=0.2502, simple_loss=0.3004, pruned_loss=0.1001, over 955791.55 frames. ], batch size: 28, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:11:31,223 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=12763.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:11:48,963 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=12780.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:11:49,029 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1620, 1.9491, 2.1108, 1.1204, 2.2020, 2.5467, 2.0583, 1.9774], device='cuda:6'), covar=tensor([0.1372, 0.0855, 0.0577, 0.0800, 0.0491, 0.0430, 0.0532, 0.0601], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0155, 0.0117, 0.0134, 0.0131, 0.0118, 0.0145, 0.0142], device='cuda:6'), out_proj_covar=tensor([9.6972e-05, 1.1541e-04, 8.5238e-05, 9.8107e-05, 9.4994e-05, 8.6909e-05, 1.0834e-04, 1.0553e-04], device='cuda:6')
2023-03-26 01:11:57,364 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6718, 1.6547, 1.5854, 1.7718, 1.1509, 4.0573, 1.6024, 2.1397], device='cuda:6'), covar=tensor([0.3765, 0.2461, 0.2186, 0.2365, 0.1964, 0.0137, 0.2799, 0.1491], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0110, 0.0115, 0.0118, 0.0115, 0.0097, 0.0100, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 01:12:19,524 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=12802.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:12:22,424 INFO [finetune.py:976] (6/7) Epoch 3, batch 1350, loss[loss=0.2991, simple_loss=0.3291, pruned_loss=0.1345, over 4146.00 frames. ], tot_loss[loss=0.2504, simple_loss=0.3005, pruned_loss=0.1002, over 955692.82 frames. ], batch size: 65, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:12:40,683 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.832e+02 2.182e+02 2.674e+02 3.468e+02, threshold=4.364e+02, percent-clipped=0.0
2023-03-26 01:12:50,672 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=12824.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 01:13:21,548 INFO [finetune.py:976] (6/7) Epoch 3, batch 1400, loss[loss=0.2852, simple_loss=0.3432, pruned_loss=0.1136, over 4904.00 frames. ], tot_loss[loss=0.2539, simple_loss=0.3042, pruned_loss=0.1018, over 954466.62 frames. ], batch size: 37, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:14:18,956 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0
2023-03-26 01:14:19,107 INFO [finetune.py:976] (6/7) Epoch 3, batch 1450, loss[loss=0.2795, simple_loss=0.3272, pruned_loss=0.1159, over 4929.00 frames. ], tot_loss[loss=0.2564, simple_loss=0.3068, pruned_loss=0.103, over 955035.51 frames. ], batch size: 33, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:14:38,007 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.344e+02 1.849e+02 2.229e+02 2.789e+02 4.972e+02, threshold=4.459e+02, percent-clipped=1.0
2023-03-26 01:15:13,773 INFO [finetune.py:976] (6/7) Epoch 3, batch 1500, loss[loss=0.2584, simple_loss=0.3088, pruned_loss=0.104, over 4712.00 frames. ], tot_loss[loss=0.258, simple_loss=0.3083, pruned_loss=0.1038, over 954834.83 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 16.0
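The zipformer.py:1188 entries track per-stack stochastic layer dropping: each encoder stack has a warm-up window in batches, and layers are occasionally bypassed (num_to_drop=1, layers_to_drop={3} at batch_count=12824 above) even long after warmup_end. A toy illustration of that kind of schedule; the probabilities here are guesses, not Zipformer's actual values:

```python
# Sketch: bypass layers with higher probability inside the warm-up
# window and a small residual probability afterwards (which would
# explain the occasional num_to_drop=1 at batch_count ~12000-15000).
import random

def layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end,
                   warm_prob=0.5, residual_prob=0.075):
    if batch_count < warmup_begin:
        p = 0.0            # too early: keep the stack stable
    elif batch_count < warmup_end:
        p = warm_prob      # inside the warm-up window
    else:
        p = residual_prob  # long after warm-up (the regime logged here)
    return {i for i in range(num_layers) if random.random() < p}

drop = layers_to_drop(num_layers=4, batch_count=12824.0,
                      warmup_begin=3333.3, warmup_end=4000.0)
print(f"num_to_drop={len(drop)}, layers_to_drop={drop}")
```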
2023-03-26 01:15:25,806 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2105, 1.5025, 0.9989, 2.0107, 2.4710, 1.7736, 1.7962, 2.1415], device='cuda:6'), covar=tensor([0.1442, 0.2182, 0.2186, 0.1240, 0.1972, 0.2076, 0.1385, 0.1892], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0099, 0.0117, 0.0094, 0.0124, 0.0098, 0.0100, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 01:15:57,692 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8625, 4.3466, 4.2107, 2.4098, 4.4725, 3.4506, 1.1951, 3.1538], device='cuda:6'), covar=tensor([0.2536, 0.1750, 0.1433, 0.2907, 0.0920, 0.0856, 0.4502, 0.1477], device='cuda:6'), in_proj_covar=tensor([0.0157, 0.0169, 0.0167, 0.0129, 0.0157, 0.0121, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 01:15:58,825 INFO [finetune.py:976] (6/7) Epoch 3, batch 1550, loss[loss=0.2737, simple_loss=0.3109, pruned_loss=0.1183, over 4847.00 frames. ], tot_loss[loss=0.255, simple_loss=0.3061, pruned_loss=0.102, over 954811.84 frames. ], batch size: 49, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:16:09,622 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.917e+01 1.840e+02 2.219e+02 2.788e+02 8.539e+02, threshold=4.437e+02, percent-clipped=2.0
2023-03-26 01:16:27,613 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-26 01:16:44,397 INFO [finetune.py:976] (6/7) Epoch 3, batch 1600, loss[loss=0.1991, simple_loss=0.2631, pruned_loss=0.0676, over 4932.00 frames. ], tot_loss[loss=0.2521, simple_loss=0.3028, pruned_loss=0.1007, over 956784.99 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:16:49,391 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0977, 1.9045, 2.1424, 1.0981, 2.2525, 2.5715, 2.0483, 2.1159], device='cuda:6'), covar=tensor([0.1124, 0.0773, 0.0525, 0.0732, 0.0571, 0.0433, 0.0476, 0.0481], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0156, 0.0118, 0.0135, 0.0132, 0.0118, 0.0146, 0.0143], device='cuda:6'), out_proj_covar=tensor([9.7578e-05, 1.1643e-04, 8.6208e-05, 9.9237e-05, 9.5930e-05, 8.7423e-05, 1.0866e-04, 1.0633e-04], device='cuda:6')
2023-03-26 01:17:20,322 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5427, 2.2377, 1.9486, 1.0152, 2.1273, 2.0172, 1.8174, 2.0559], device='cuda:6'), covar=tensor([0.0757, 0.0868, 0.1587, 0.2390, 0.1500, 0.2088, 0.2031, 0.1112], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0196, 0.0202, 0.0187, 0.0213, 0.0208, 0.0214, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:17:41,270 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=13102.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:17:43,645 INFO [finetune.py:976] (6/7) Epoch 3, batch 1650, loss[loss=0.2247, simple_loss=0.2757, pruned_loss=0.08684, over 4910.00 frames. ], tot_loss[loss=0.2485, simple_loss=0.299, pruned_loss=0.09903, over 957440.96 frames. ], batch size: 36, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:17:52,795 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1729, 1.3597, 1.0277, 1.4263, 1.4283, 2.2987, 1.1950, 1.4641], device='cuda:6'), covar=tensor([0.0834, 0.1340, 0.1305, 0.0778, 0.1282, 0.0425, 0.1175, 0.1281], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0080, 0.0093, 0.0084, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 01:17:55,151 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.264e+02 1.841e+02 2.109e+02 2.450e+02 4.226e+02, threshold=4.217e+02, percent-clipped=0.0
2023-03-26 01:17:56,995 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=13119.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 01:18:32,152 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=13150.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:18:41,623 INFO [finetune.py:976] (6/7) Epoch 3, batch 1700, loss[loss=0.2566, simple_loss=0.2922, pruned_loss=0.1105, over 4901.00 frames. ], tot_loss[loss=0.2479, simple_loss=0.2977, pruned_loss=0.09906, over 957222.11 frames. ], batch size: 35, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:18:45,349 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5149, 1.2977, 1.1640, 0.9186, 1.2242, 1.2456, 1.1901, 1.9018], device='cuda:6'), covar=tensor([1.3723, 1.2260, 0.9773, 1.3085, 1.0347, 0.6936, 1.2178, 0.4490], device='cuda:6'), in_proj_covar=tensor([0.0268, 0.0245, 0.0218, 0.0281, 0.0234, 0.0195, 0.0238, 0.0182], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:19:14,510 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7571, 1.5254, 1.2605, 1.4565, 1.4333, 1.3861, 1.3873, 2.3047], device='cuda:6'), covar=tensor([1.4356, 1.2782, 1.0587, 1.4510, 1.1632, 0.7255, 1.3546, 0.4190], device='cuda:6'), in_proj_covar=tensor([0.0267, 0.0244, 0.0217, 0.0280, 0.0233, 0.0195, 0.0237, 0.0181], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:19:22,356 INFO [finetune.py:976] (6/7) Epoch 3, batch 1750, loss[loss=0.2542, simple_loss=0.3089, pruned_loss=0.09979, over 4818.00 frames. ], tot_loss[loss=0.252, simple_loss=0.3013, pruned_loss=0.1013, over 954838.11 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:19:31,218 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.973e+02 2.314e+02 2.693e+02 6.749e+02, threshold=4.629e+02, percent-clipped=3.0
2023-03-26 01:19:53,284 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0024, 1.8900, 1.4255, 2.1373, 2.1296, 1.6389, 2.6652, 1.9207], device='cuda:6'), covar=tensor([0.2577, 0.5433, 0.5408, 0.5345, 0.3827, 0.2703, 0.4966, 0.3576], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0197, 0.0239, 0.0255, 0.0220, 0.0184, 0.0209, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:20:18,402 INFO [finetune.py:976] (6/7) Epoch 3, batch 1800, loss[loss=0.3136, simple_loss=0.3677, pruned_loss=0.1297, over 4867.00 frames. ], tot_loss[loss=0.2542, simple_loss=0.3046, pruned_loss=0.1019, over 955518.28 frames. ], batch size: 44, lr: 3.98e-03, grad_scale: 16.0
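The scaling.py:679 lines ("Whitening: ... metric=1.22 vs. limit=2.0") come from a constraint that keeps activation covariances close to isotropic. A plausible reconstruction of the metric is the eigenvalue-spread ratio E[lambda^2]/E[lambda]^2 computed via traces, which is 1.0 for a perfectly white covariance and grows when a few directions dominate; treat this as a sketch, not the verbatim icefall code:

```python
# Sketch of a whitening metric: n * trace(C^2) / trace(C)^2 for an
# n-channel covariance C, i.e. the mean squared eigenvalue over the
# squared mean eigenvalue.
import torch

def whitening_metric(x: torch.Tensor) -> float:
    # x: (frames, channels) activations for one group
    x = x - x.mean(dim=0)
    cov = (x.T @ x) / x.shape[0]            # (channels, channels)
    n = cov.shape[0]
    return (n * torch.trace(cov @ cov) / torch.trace(cov) ** 2).item()

white = torch.randn(1000, 384)
# ~1.3-1.5 here from sampling noise alone; a trained layer can drift
# toward the limit logged above (2.0 or 5.0 depending on the module).
print(whitening_metric(white))
```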
2023-03-26 01:20:59,916 INFO [finetune.py:976] (6/7) Epoch 3, batch 1850, loss[loss=0.2708, simple_loss=0.3151, pruned_loss=0.1132, over 4924.00 frames. ], tot_loss[loss=0.2534, simple_loss=0.3043, pruned_loss=0.1012, over 954397.87 frames. ], batch size: 37, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:21:00,667 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6087, 1.4505, 1.4478, 1.6889, 2.1099, 1.6266, 1.0924, 1.3438], device='cuda:6'), covar=tensor([0.2673, 0.2598, 0.2118, 0.1951, 0.2121, 0.1367, 0.3292, 0.2090], device='cuda:6'), in_proj_covar=tensor([0.0227, 0.0208, 0.0195, 0.0181, 0.0230, 0.0172, 0.0212, 0.0184], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:21:08,033 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.104e+02 1.691e+02 1.934e+02 2.488e+02 4.482e+02, threshold=3.868e+02, percent-clipped=0.0
2023-03-26 01:21:50,069 INFO [finetune.py:976] (6/7) Epoch 3, batch 1900, loss[loss=0.3248, simple_loss=0.3643, pruned_loss=0.1427, over 4847.00 frames. ], tot_loss[loss=0.2547, simple_loss=0.3063, pruned_loss=0.1015, over 957377.04 frames. ], batch size: 47, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:22:26,106 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-26 01:22:40,358 INFO [finetune.py:976] (6/7) Epoch 3, batch 1950, loss[loss=0.2355, simple_loss=0.2875, pruned_loss=0.09181, over 4827.00 frames. ], tot_loss[loss=0.2528, simple_loss=0.3041, pruned_loss=0.1007, over 958192.98 frames. ], batch size: 30, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:22:46,997 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.849e+02 2.191e+02 2.472e+02 6.030e+02, threshold=4.381e+02, percent-clipped=4.0
2023-03-26 01:22:48,821 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=13419.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:23:25,634 INFO [finetune.py:976] (6/7) Epoch 3, batch 2000, loss[loss=0.2237, simple_loss=0.2745, pruned_loss=0.08646, over 4766.00 frames. ], tot_loss[loss=0.2504, simple_loss=0.3009, pruned_loss=0.1, over 957318.18 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:23:36,607 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=13467.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:24:18,862 INFO [finetune.py:976] (6/7) Epoch 3, batch 2050, loss[loss=0.21, simple_loss=0.2624, pruned_loss=0.07878, over 4814.00 frames. ], tot_loss[loss=0.2479, simple_loss=0.298, pruned_loss=0.09891, over 958827.00 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:24:33,952 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.806e+02 2.206e+02 2.674e+02 5.377e+02, threshold=4.412e+02, percent-clipped=2.0
2023-03-26 01:25:00,753 INFO [finetune.py:976] (6/7) Epoch 3, batch 2100, loss[loss=0.2879, simple_loss=0.3367, pruned_loss=0.1196, over 4844.00 frames. ], tot_loss[loss=0.247, simple_loss=0.2972, pruned_loss=0.09843, over 957961.71 frames. ], batch size: 44, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:25:45,307 INFO [finetune.py:976] (6/7) Epoch 3, batch 2150, loss[loss=0.2546, simple_loss=0.314, pruned_loss=0.0976, over 4835.00 frames. ], tot_loss[loss=0.2515, simple_loss=0.3022, pruned_loss=0.1004, over 958821.28 frames. ], batch size: 47, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:26:01,366 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.918e+02 2.256e+02 2.684e+02 5.304e+02, threshold=4.512e+02, percent-clipped=2.0
2023-03-26 01:26:27,570 INFO [finetune.py:976] (6/7) Epoch 3, batch 2200, loss[loss=0.2429, simple_loss=0.3072, pruned_loss=0.08927, over 4883.00 frames. ], tot_loss[loss=0.2529, simple_loss=0.304, pruned_loss=0.1009, over 958984.41 frames. ], batch size: 35, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:27:00,298 INFO [finetune.py:976] (6/7) Epoch 3, batch 2250, loss[loss=0.2879, simple_loss=0.3439, pruned_loss=0.1159, over 4903.00 frames. ], tot_loss[loss=0.2544, simple_loss=0.3058, pruned_loss=0.1015, over 958393.25 frames. ], batch size: 36, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:27:08,393 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.927e+02 2.166e+02 2.564e+02 5.587e+02, threshold=4.333e+02, percent-clipped=2.0
2023-03-26 01:27:24,292 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=13737.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:27:42,152 INFO [finetune.py:976] (6/7) Epoch 3, batch 2300, loss[loss=0.258, simple_loss=0.3037, pruned_loss=0.1061, over 4828.00 frames. ], tot_loss[loss=0.2536, simple_loss=0.3053, pruned_loss=0.1009, over 956608.81 frames. ], batch size: 30, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:28:25,839 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=13798.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 01:28:36,681 INFO [finetune.py:976] (6/7) Epoch 3, batch 2350, loss[loss=0.21, simple_loss=0.2673, pruned_loss=0.07637, over 4868.00 frames. ], tot_loss[loss=0.2522, simple_loss=0.3031, pruned_loss=0.1006, over 956569.01 frames. ], batch size: 31, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:28:50,254 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.589e+01 1.791e+02 2.182e+02 2.579e+02 6.380e+02, threshold=4.365e+02, percent-clipped=2.0
2023-03-26 01:28:51,415 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.91 vs. limit=5.0
2023-03-26 01:29:31,312 INFO [finetune.py:976] (6/7) Epoch 3, batch 2400, loss[loss=0.2421, simple_loss=0.2846, pruned_loss=0.09978, over 4918.00 frames. ], tot_loss[loss=0.2483, simple_loss=0.2992, pruned_loss=0.09875, over 957748.86 frames. ], batch size: 43, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:30:15,689 INFO [finetune.py:976] (6/7) Epoch 3, batch 2450, loss[loss=0.2732, simple_loss=0.3188, pruned_loss=0.1139, over 4869.00 frames. ], tot_loss[loss=0.2469, simple_loss=0.2968, pruned_loss=0.09851, over 958575.06 frames. ], batch size: 34, lr: 3.98e-03, grad_scale: 16.0
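The zipformer.py:2441 dumps report one attn_weights_entropy value per attention head (eight values, matching this model's nhead of 8), plus covariance diagnostics. A sketch of the per-head statistic, with shapes assumed for illustration:

```python
# Sketch: entropy of each head's attention distribution, averaged over
# query positions. Low entropy = peaked attention, high = diffuse.
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (num_heads, query_len, key_len); rows are softmax outputs
    ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (heads, queries)
    return ent.mean(dim=-1)                           # one value per head

attn = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attn_weights_entropy(attn))  # 8 values, like the tensors above
```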
2023-03-26 01:30:29,106 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.953e+01 1.906e+02 2.150e+02 2.578e+02 4.181e+02, threshold=4.299e+02, percent-clipped=0.0
2023-03-26 01:30:42,092 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8092, 1.7534, 1.6880, 2.0453, 2.3431, 2.0181, 1.4828, 1.5517], device='cuda:6'), covar=tensor([0.2959, 0.2638, 0.2292, 0.2070, 0.2190, 0.1401, 0.3142, 0.2345], device='cuda:6'), in_proj_covar=tensor([0.0227, 0.0207, 0.0194, 0.0180, 0.0230, 0.0171, 0.0211, 0.0184], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:30:45,121 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5532, 1.6542, 1.9949, 1.9922, 1.9589, 4.2306, 1.4920, 1.9431], device='cuda:6'), covar=tensor([0.1083, 0.1707, 0.1257, 0.1095, 0.1480, 0.0153, 0.1521, 0.1675], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0079, 0.0093, 0.0083, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 01:30:59,052 INFO [finetune.py:976] (6/7) Epoch 3, batch 2500, loss[loss=0.2205, simple_loss=0.2813, pruned_loss=0.07982, over 4778.00 frames. ], tot_loss[loss=0.2493, simple_loss=0.2988, pruned_loss=0.09992, over 956187.97 frames. ], batch size: 28, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:31:33,366 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3209, 1.5754, 1.7101, 1.0148, 1.5377, 1.8134, 1.8108, 1.5646], device='cuda:6'), covar=tensor([0.0927, 0.0573, 0.0366, 0.0507, 0.0347, 0.0474, 0.0296, 0.0544], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0117, 0.0136, 0.0132, 0.0119, 0.0146, 0.0143], device='cuda:6'), out_proj_covar=tensor([9.7915e-05, 1.1691e-04, 8.5636e-05, 9.9619e-05, 9.5275e-05, 8.8165e-05, 1.0885e-04, 1.0615e-04], device='cuda:6')
2023-03-26 01:31:53,427 INFO [finetune.py:976] (6/7) Epoch 3, batch 2550, loss[loss=0.3045, simple_loss=0.3315, pruned_loss=0.1388, over 4863.00 frames. ], tot_loss[loss=0.2527, simple_loss=0.303, pruned_loss=0.1012, over 955898.88 frames. ], batch size: 31, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:32:02,446 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.924e+01 1.822e+02 2.075e+02 2.520e+02 4.375e+02, threshold=4.150e+02, percent-clipped=1.0
2023-03-26 01:32:30,592 INFO [finetune.py:976] (6/7) Epoch 3, batch 2600, loss[loss=0.2359, simple_loss=0.2974, pruned_loss=0.08722, over 4752.00 frames. ], tot_loss[loss=0.2534, simple_loss=0.3042, pruned_loss=0.1013, over 955572.13 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:32:41,360 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14072.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:33:04,947 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14093.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 01:33:16,411 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14103.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:33:18,152 INFO [finetune.py:976] (6/7) Epoch 3, batch 2650, loss[loss=0.2223, simple_loss=0.2804, pruned_loss=0.08212, over 4754.00 frames. ], tot_loss[loss=0.2545, simple_loss=0.3055, pruned_loss=0.1017, over 954486.64 frames. ], batch size: 27, lr: 3.98e-03, grad_scale: 16.0
2023-03-26 01:33:34,829 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.159e+02 1.796e+02 2.195e+02 2.771e+02 4.502e+02, threshold=4.390e+02, percent-clipped=2.0
2023-03-26 01:33:37,244 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14120.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 01:33:56,202 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14133.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:34:00,078 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 01:34:18,022 INFO [finetune.py:976] (6/7) Epoch 3, batch 2700, loss[loss=0.3554, simple_loss=0.3679, pruned_loss=0.1714, over 4206.00 frames. ], tot_loss[loss=0.2533, simple_loss=0.3039, pruned_loss=0.1014, over 951409.49 frames. ], batch size: 65, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:34:28,765 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14164.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:34:39,805 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5964, 1.5351, 1.5156, 1.5944, 1.0231, 3.1019, 1.2235, 1.7973], device='cuda:6'), covar=tensor([0.3291, 0.2171, 0.1846, 0.2146, 0.1871, 0.0213, 0.2787, 0.1356], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0110, 0.0116, 0.0118, 0.0115, 0.0097, 0.0100, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 01:34:50,955 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14181.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 01:35:04,510 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2839, 2.9388, 3.0195, 3.1936, 3.0543, 2.8753, 3.3540, 1.0282], device='cuda:6'), covar=tensor([0.1171, 0.0937, 0.1106, 0.1178, 0.1700, 0.1719, 0.1080, 0.5568], device='cuda:6'), in_proj_covar=tensor([0.0369, 0.0244, 0.0276, 0.0294, 0.0340, 0.0286, 0.0313, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:35:13,614 INFO [finetune.py:976] (6/7) Epoch 3, batch 2750, loss[loss=0.2116, simple_loss=0.2612, pruned_loss=0.08103, over 4774.00 frames. ], tot_loss[loss=0.2489, simple_loss=0.2994, pruned_loss=0.09921, over 954035.96 frames. ], batch size: 28, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:35:20,819 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.010e+02 1.613e+02 1.949e+02 2.418e+02 3.837e+02, threshold=3.898e+02, percent-clipped=0.0
2023-03-26 01:35:22,088 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0
2023-03-26 01:35:29,720 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5811, 3.7331, 3.7245, 1.9713, 3.8824, 2.9938, 0.8416, 2.7809], device='cuda:6'), covar=tensor([0.2678, 0.1756, 0.1611, 0.3285, 0.1087, 0.0908, 0.4720, 0.1461], device='cuda:6'), in_proj_covar=tensor([0.0157, 0.0169, 0.0166, 0.0130, 0.0156, 0.0121, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 01:35:38,775 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 01:35:50,234 INFO [finetune.py:976] (6/7) Epoch 3, batch 2800, loss[loss=0.2463, simple_loss=0.2781, pruned_loss=0.1072, over 4539.00 frames. ], tot_loss[loss=0.2444, simple_loss=0.2948, pruned_loss=0.09697, over 953224.85 frames. ], batch size: 19, lr: 3.98e-03, grad_scale: 32.0
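The grad_scale value moves between 16.0 and 32.0 across this section (it doubles at batch 2700 above), which is the signature of dynamic loss scaling in fp16 mixed-precision training: the scale is halved on overflow and grown back after a run of good steps. Minimal torch.cuda.amp usage showing where that number comes from, assuming a CUDA device like this run's:

```python
# Sketch: the value logged as `grad_scale` is GradScaler's current
# loss scale. It shrinks on inf/nan gradients and grows otherwise.
import torch

model = torch.nn.Linear(80, 500).cuda()
opt = torch.optim.Adam(model.parameters())
scaler = torch.cuda.amp.GradScaler(init_scale=32.0)

x = torch.randn(4, 80, device="cuda")
with torch.cuda.amp.autocast():
    loss = model(x).logsumexp(dim=-1).mean()
scaler.scale(loss).backward()  # gradients carry the scale factor
scaler.step(opt)               # unscales; skips the step on overflow
scaler.update()                # grows or shrinks the scale
print(scaler.get_scale())      # the quantity the log reports
```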
2023-03-26 01:36:06,456 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5843, 1.5063, 1.5794, 1.4729, 1.1438, 2.9688, 1.1720, 1.6127], device='cuda:6'), covar=tensor([0.3640, 0.2518, 0.1984, 0.2384, 0.1917, 0.0260, 0.2822, 0.1442], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0111, 0.0116, 0.0119, 0.0116, 0.0097, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 01:36:18,438 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6704, 3.3807, 3.4105, 2.0945, 3.5773, 2.7635, 1.2050, 2.5359], device='cuda:6'), covar=tensor([0.2956, 0.1681, 0.1435, 0.2694, 0.1019, 0.0843, 0.3808, 0.1361], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0168, 0.0166, 0.0129, 0.0155, 0.0120, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 01:36:35,464 INFO [finetune.py:976] (6/7) Epoch 3, batch 2850, loss[loss=0.2581, simple_loss=0.3123, pruned_loss=0.1019, over 4928.00 frames. ], tot_loss[loss=0.2465, simple_loss=0.2961, pruned_loss=0.0985, over 952601.61 frames. ], batch size: 33, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:36:48,614 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.210e+02 1.714e+02 2.069e+02 2.397e+02 3.427e+02, threshold=4.138e+02, percent-clipped=0.0
2023-03-26 01:37:26,623 INFO [finetune.py:976] (6/7) Epoch 3, batch 2900, loss[loss=0.2512, simple_loss=0.3009, pruned_loss=0.1008, over 4767.00 frames. ], tot_loss[loss=0.2515, simple_loss=0.301, pruned_loss=0.101, over 952254.13 frames. ], batch size: 28, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:38:06,356 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.2037, 4.4640, 4.7232, 4.9852, 4.8857, 4.7118, 5.3235, 1.5469], device='cuda:6'), covar=tensor([0.0700, 0.0722, 0.0659, 0.0850, 0.1166, 0.1215, 0.0488, 0.5180], device='cuda:6'), in_proj_covar=tensor([0.0367, 0.0244, 0.0274, 0.0295, 0.0339, 0.0286, 0.0311, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:38:06,380 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14393.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:38:25,748 INFO [finetune.py:976] (6/7) Epoch 3, batch 2950, loss[loss=0.2224, simple_loss=0.272, pruned_loss=0.08642, over 4721.00 frames. ], tot_loss[loss=0.2549, simple_loss=0.3052, pruned_loss=0.1023, over 951562.64 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:38:35,798 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14413.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:38:38,199 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.306e+02 1.859e+02 2.169e+02 2.721e+02 5.785e+02, threshold=4.339e+02, percent-clipped=3.0
2023-03-26 01:38:44,898 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2798, 2.9099, 3.0225, 3.1898, 3.0638, 2.9517, 3.3615, 0.9875], device='cuda:6'), covar=tensor([0.1075, 0.0916, 0.1025, 0.1148, 0.1479, 0.1478, 0.1015, 0.4757], device='cuda:6'), in_proj_covar=tensor([0.0365, 0.0243, 0.0273, 0.0293, 0.0337, 0.0283, 0.0309, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:38:50,261 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14428.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:39:02,085 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14441.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:39:20,291 INFO [finetune.py:976] (6/7) Epoch 3, batch 3000, loss[loss=0.2127, simple_loss=0.2611, pruned_loss=0.0822, over 4742.00 frames. ], tot_loss[loss=0.2549, simple_loss=0.3058, pruned_loss=0.102, over 952927.33 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:39:20,291 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 01:39:24,609 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6308, 1.4640, 1.9911, 2.8681, 2.0524, 2.2050, 1.0708, 2.2697], device='cuda:6'), covar=tensor([0.1754, 0.1636, 0.1202, 0.0579, 0.0861, 0.1186, 0.1813, 0.0739], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0119, 0.0138, 0.0165, 0.0104, 0.0145, 0.0130, 0.0107], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 01:39:24,898 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7078, 0.9183, 1.6017, 1.4933, 1.4220, 1.3751, 1.3210, 1.4734], device='cuda:6'), covar=tensor([0.9005, 1.5312, 1.1912, 1.3265, 1.4013, 1.0298, 1.7360, 1.0030], device='cuda:6'), in_proj_covar=tensor([0.0226, 0.0254, 0.0252, 0.0266, 0.0243, 0.0217, 0.0277, 0.0218], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:39:37,113 INFO [finetune.py:1010] (6/7) Epoch 3, validation: loss=0.1777, simple_loss=0.2485, pruned_loss=0.05342, over 2265189.00 frames.
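At batch 3000 the run pauses to compute the dev-set loss and reports a single frame-weighted figure ("validation: loss=0.1777 ... over 2265189.00 frames"). A sketch of that step; model.compute_loss and dev_loader are placeholder names, not the finetune.py API:

```python
# Sketch: accumulate the same training loss over the whole dev set,
# weighted by frame count, and report one number.
import torch

def validate(model, dev_loader):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            loss, num_frames = model.compute_loss(batch)  # placeholder
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames, tot_frames
```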
2023-03-26 01:39:37,113 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB
2023-03-26 01:39:42,144 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14459.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:40:02,189 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14474.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:40:03,374 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14476.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 01:40:14,779 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5809, 1.4564, 1.4626, 1.6141, 1.0861, 3.5155, 1.3703, 1.9296], device='cuda:6'), covar=tensor([0.3523, 0.2466, 0.2079, 0.2295, 0.2065, 0.0154, 0.2710, 0.1474], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0111, 0.0116, 0.0119, 0.0116, 0.0097, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 01:40:15,402 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14487.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:40:36,155 INFO [finetune.py:976] (6/7) Epoch 3, batch 3050, loss[loss=0.2155, simple_loss=0.2647, pruned_loss=0.0832, over 4729.00 frames. ], tot_loss[loss=0.2556, simple_loss=0.3065, pruned_loss=0.1024, over 952281.64 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:40:47,132 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.81 vs. limit=2.0
2023-03-26 01:40:52,395 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4780, 1.3838, 1.6309, 1.8527, 1.5866, 3.2894, 1.2672, 1.6250], device='cuda:6'), covar=tensor([0.1099, 0.1858, 0.1474, 0.1078, 0.1681, 0.0259, 0.1619, 0.1824], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0079, 0.0093, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 01:40:52,886 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.290e+02 1.934e+02 2.277e+02 2.724e+02 4.940e+02, threshold=4.554e+02, percent-clipped=2.0
2023-03-26 01:41:03,500 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2026, 2.0567, 1.5903, 2.3204, 2.4434, 1.8494, 2.8268, 2.2336], device='cuda:6'), covar=tensor([0.2019, 0.4830, 0.5071, 0.4386, 0.2980, 0.2147, 0.4550, 0.2777], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0195, 0.0239, 0.0254, 0.0220, 0.0184, 0.0208, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:41:09,784 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
2023-03-26 01:41:18,000 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14548.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:41:22,790 INFO [finetune.py:976] (6/7) Epoch 3, batch 3100, loss[loss=0.2127, simple_loss=0.2772, pruned_loss=0.07416, over 4830.00 frames. ], tot_loss[loss=0.2511, simple_loss=0.3027, pruned_loss=0.09981, over 954987.42 frames. ], batch size: 38, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:42:10,664 INFO [finetune.py:976] (6/7) Epoch 3, batch 3150, loss[loss=0.2813, simple_loss=0.3145, pruned_loss=0.1241, over 4844.00 frames. ], tot_loss[loss=0.2496, simple_loss=0.3003, pruned_loss=0.09946, over 956003.79 frames. ], batch size: 44, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:42:18,343 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.775e+02 2.189e+02 2.683e+02 4.981e+02, threshold=4.378e+02, percent-clipped=2.0
2023-03-26 01:42:23,138 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8520, 1.6688, 1.3400, 1.5620, 1.5582, 1.5505, 1.5654, 2.4290], device='cuda:6'), covar=tensor([1.1663, 1.1465, 0.8838, 1.1642, 0.9821, 0.5963, 1.0993, 0.3403], device='cuda:6'), in_proj_covar=tensor([0.0269, 0.0245, 0.0217, 0.0281, 0.0233, 0.0194, 0.0236, 0.0183], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:42:26,775 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4525, 1.4278, 1.6532, 1.0107, 1.4987, 1.9286, 1.7888, 1.4998], device='cuda:6'), covar=tensor([0.1413, 0.1086, 0.0571, 0.0750, 0.0584, 0.0701, 0.0580, 0.0765], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0158, 0.0118, 0.0137, 0.0133, 0.0120, 0.0147, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.8912e-05, 1.1768e-04, 8.6134e-05, 1.0024e-04, 9.6159e-05, 8.8529e-05, 1.1012e-04, 1.0648e-04], device='cuda:6')
2023-03-26 01:43:00,101 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7484, 1.6454, 1.2873, 1.5825, 1.5264, 1.5061, 1.5256, 2.3413], device='cuda:6'), covar=tensor([1.1747, 1.1492, 0.9116, 1.2013, 0.9923, 0.5917, 1.1809, 0.3484], device='cuda:6'), in_proj_covar=tensor([0.0270, 0.0246, 0.0218, 0.0282, 0.0234, 0.0195, 0.0237, 0.0183], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:43:00,561 INFO [finetune.py:976] (6/7) Epoch 3, batch 3200, loss[loss=0.2556, simple_loss=0.3032, pruned_loss=0.104, over 4808.00 frames. ], tot_loss[loss=0.2455, simple_loss=0.2958, pruned_loss=0.09761, over 957524.95 frames. ], batch size: 45, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:43:39,754 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0
2023-03-26 01:43:41,547 INFO [finetune.py:976] (6/7) Epoch 3, batch 3250, loss[loss=0.2555, simple_loss=0.3094, pruned_loss=0.1008, over 4772.00 frames. ], tot_loss[loss=0.2465, simple_loss=0.2963, pruned_loss=0.09835, over 954109.42 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:43:54,505 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.171e+02 1.718e+02 2.102e+02 2.544e+02 5.358e+02, threshold=4.204e+02, percent-clipped=1.0
2023-03-26 01:44:08,045 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14728.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:44:15,460 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=14740.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:44:31,811 INFO [finetune.py:976] (6/7) Epoch 3, batch 3300, loss[loss=0.2351, simple_loss=0.2989, pruned_loss=0.08564, over 4749.00 frames. ], tot_loss[loss=0.2491, simple_loss=0.2995, pruned_loss=0.09935, over 953849.68 frames. ], batch size: 27, lr: 3.98e-03, grad_scale: 32.0
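The tot_loss[...] figures hover around a frame mass of ~950,000, and the fractional counts (e.g. "over 953849.68 frames") suggest an exponentially decayed running sum rather than a plain window; with ~4,800-frame batches, a decay of 0.995 gives an equilibrium mass of roughly 4800 / 0.005 = 960,000 frames, in line with the totals here. A sketch with that guessed decay:

```python
# Sketch: decay the accumulated loss and frame totals each batch, then
# add the new batch; the reported tot_loss is their ratio.
class RunningLoss:
    def __init__(self, decay: float = 0.995):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> float:
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames
        return self.loss_sum / self.frames

tracker = RunningLoss()
for loss, frames in [(0.26, 4800), (0.25, 4900), (0.24, 4700)]:
    print(tracker.update(loss, frames))
```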
2023-03-26 01:44:35,946 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14759.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:44:42,767 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14769.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:44:48,715 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14776.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:44:48,740 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=14776.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 01:45:09,021 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=14801.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:45:16,879 INFO [finetune.py:976] (6/7) Epoch 3, batch 3350, loss[loss=0.2637, simple_loss=0.3168, pruned_loss=0.1053, over 4893.00 frames. ], tot_loss[loss=0.2497, simple_loss=0.3009, pruned_loss=0.0993, over 953883.93 frames. ], batch size: 32, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:45:17,543 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14807.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:45:29,677 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.682e+02 2.084e+02 2.593e+02 4.183e+02, threshold=4.169e+02, percent-clipped=0.0
2023-03-26 01:45:39,456 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=14824.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 01:45:52,842 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=14843.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:46:01,263 INFO [finetune.py:976] (6/7) Epoch 3, batch 3400, loss[loss=0.266, simple_loss=0.3189, pruned_loss=0.1065, over 4893.00 frames. ], tot_loss[loss=0.2508, simple_loss=0.3023, pruned_loss=0.09971, over 954416.11 frames. ], batch size: 32, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:46:22,044 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0
2023-03-26 01:46:27,218 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4667, 1.4931, 1.5066, 1.7081, 1.5821, 2.7697, 1.3730, 1.6118], device='cuda:6'), covar=tensor([0.0887, 0.1360, 0.1365, 0.0858, 0.1247, 0.0305, 0.1210, 0.1306], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0080, 0.0093, 0.0084, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 01:46:41,114 INFO [finetune.py:976] (6/7) Epoch 3, batch 3450, loss[loss=0.2231, simple_loss=0.2695, pruned_loss=0.08839, over 4794.00 frames. ], tot_loss[loss=0.2501, simple_loss=0.3018, pruned_loss=0.09918, over 952624.49 frames. ], batch size: 25, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:46:53,151 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.352e+02 1.935e+02 2.237e+02 2.692e+02 3.962e+02, threshold=4.475e+02, percent-clipped=0.0
2023-03-26 01:47:15,083 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-26 01:47:18,088 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9847, 1.7872, 1.5364, 1.3756, 2.0852, 2.2924, 2.0575, 1.8280], device='cuda:6'), covar=tensor([0.0310, 0.0450, 0.0517, 0.0455, 0.0344, 0.0551, 0.0283, 0.0384], device='cuda:6'), in_proj_covar=tensor([0.0082, 0.0112, 0.0134, 0.0115, 0.0104, 0.0097, 0.0088, 0.0107], device='cuda:6'), out_proj_covar=tensor([6.4255e-05, 8.8983e-05, 1.0777e-04, 9.1257e-05, 8.2190e-05, 7.2168e-05, 6.7739e-05, 8.3601e-05], device='cuda:6')
2023-03-26 01:47:27,611 INFO [finetune.py:976] (6/7) Epoch 3, batch 3500, loss[loss=0.2651, simple_loss=0.3132, pruned_loss=0.1085, over 4729.00 frames. ], tot_loss[loss=0.2489, simple_loss=0.2999, pruned_loss=0.09899, over 953220.43 frames. ], batch size: 54, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:47:56,143 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7867, 4.7322, 4.4138, 3.1533, 4.7647, 3.7125, 0.8681, 3.5459], device='cuda:6'), covar=tensor([0.2602, 0.1553, 0.1289, 0.2537, 0.0770, 0.0795, 0.4946, 0.1355], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0168, 0.0165, 0.0129, 0.0155, 0.0120, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 01:48:19,808 INFO [finetune.py:976] (6/7) Epoch 3, batch 3550, loss[loss=0.2685, simple_loss=0.307, pruned_loss=0.115, over 4911.00 frames. ], tot_loss[loss=0.2462, simple_loss=0.2967, pruned_loss=0.09778, over 954541.12 frames. ], batch size: 37, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:48:26,976 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.116e+02 1.780e+02 2.199e+02 2.800e+02 5.904e+02, threshold=4.398e+02, percent-clipped=2.0
2023-03-26 01:49:03,542 INFO [finetune.py:976] (6/7) Epoch 3, batch 3600, loss[loss=0.2546, simple_loss=0.3038, pruned_loss=0.1027, over 4904.00 frames. ], tot_loss[loss=0.2427, simple_loss=0.2934, pruned_loss=0.09594, over 956290.38 frames. ], batch size: 35, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:49:05,498 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7952, 1.6782, 2.0451, 1.4787, 1.9087, 1.9566, 1.5713, 2.1631], device='cuda:6'), covar=tensor([0.1729, 0.2251, 0.1651, 0.2107, 0.1082, 0.1865, 0.2832, 0.1069], device='cuda:6'), in_proj_covar=tensor([0.0207, 0.0206, 0.0206, 0.0198, 0.0182, 0.0228, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:49:12,116 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15069.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:49:41,805 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4280, 1.1959, 1.1480, 1.3809, 1.6042, 1.4545, 0.8031, 1.2113], device='cuda:6'), covar=tensor([0.2701, 0.2607, 0.2341, 0.2064, 0.1954, 0.1453, 0.3114, 0.2259], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0208, 0.0195, 0.0182, 0.0232, 0.0171, 0.0212, 0.0186], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:49:42,970 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=15096.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:49:49,067 INFO [finetune.py:976] (6/7) Epoch 3, batch 3650, loss[loss=0.3224, simple_loss=0.3638, pruned_loss=0.1405, over 4822.00 frames. ], tot_loss[loss=0.2437, simple_loss=0.2947, pruned_loss=0.09639, over 955834.64 frames. ], batch size: 45, lr: 3.98e-03, grad_scale: 32.0
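The learning rate drifts from 3.99e-03 to 3.98e-03 over this section because icefall's Eden schedule decays with both batch index and epoch. A sketch of the formula as I read icefall's optim.py, with this run's parameter values; it reproduces the logged lr at these batch counts:

```python
# Sketch of the Eden schedule (reconstruction, not the verbatim code).
def eden_lr(base_lr, batch, epoch,
            lr_batches=100_000.0, lr_epochs=100.0):
    return (base_lr
            * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
            * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

print(eden_lr(0.004, batch=12055, epoch=3))   # ~3.99e-03, as logged
print(eden_lr(0.004, batch=15100, epoch=3))   # ~3.98e-03
```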
2023-03-26 01:49:56,320 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.927e+02 2.238e+02 2.686e+02 4.916e+02, threshold=4.476e+02, percent-clipped=1.0
2023-03-26 01:50:00,675 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15117.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:50:08,033 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8856, 1.6633, 1.3480, 1.3753, 1.5860, 1.5714, 1.5269, 2.3827], device='cuda:6'), covar=tensor([1.0258, 0.8766, 0.7710, 0.9724, 0.7518, 0.5697, 0.8510, 0.3105], device='cuda:6'), in_proj_covar=tensor([0.0271, 0.0248, 0.0218, 0.0283, 0.0234, 0.0196, 0.0238, 0.0185], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:50:10,629 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4856, 1.3348, 1.1206, 0.9969, 1.2354, 1.2407, 1.1805, 1.8953], device='cuda:6'), covar=tensor([1.2259, 1.0573, 0.8832, 1.1150, 0.8665, 0.6257, 1.0510, 0.4153], device='cuda:6'), in_proj_covar=tensor([0.0271, 0.0249, 0.0218, 0.0283, 0.0234, 0.0196, 0.0238, 0.0185], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:50:28,378 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15143.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:50:41,688 INFO [finetune.py:976] (6/7) Epoch 3, batch 3700, loss[loss=0.2909, simple_loss=0.3273, pruned_loss=0.1273, over 4117.00 frames. ], tot_loss[loss=0.2482, simple_loss=0.2993, pruned_loss=0.09851, over 953497.99 frames. ], batch size: 65, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:51:04,382 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.33 vs. limit=5.0
2023-03-26 01:51:16,176 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15191.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:51:19,599 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=15195.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:51:29,142 INFO [finetune.py:976] (6/7) Epoch 3, batch 3750, loss[loss=0.2785, simple_loss=0.3328, pruned_loss=0.1121, over 4919.00 frames. ], tot_loss[loss=0.2488, simple_loss=0.3005, pruned_loss=0.09854, over 955170.30 frames. ], batch size: 33, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:51:40,515 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.801e+02 2.153e+02 2.622e+02 6.720e+02, threshold=4.305e+02, percent-clipped=1.0
2023-03-26 01:52:32,676 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6832, 3.8538, 3.6425, 1.8014, 3.9242, 2.9109, 0.7151, 2.7549], device='cuda:6'), covar=tensor([0.2438, 0.1596, 0.1445, 0.3334, 0.1104, 0.0992, 0.4668, 0.1519], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0168, 0.0165, 0.0129, 0.0155, 0.0121, 0.0146, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 01:52:33,791 INFO [finetune.py:976] (6/7) Epoch 3, batch 3800, loss[loss=0.2176, simple_loss=0.2706, pruned_loss=0.08229, over 4745.00 frames. ], tot_loss[loss=0.249, simple_loss=0.301, pruned_loss=0.09853, over 954672.31 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:52:33,930 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=15256.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 01:52:45,079 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0
2023-03-26 01:53:22,428 INFO [finetune.py:976] (6/7) Epoch 3, batch 3850, loss[loss=0.2752, simple_loss=0.3147, pruned_loss=0.1179, over 4918.00 frames. ], tot_loss[loss=0.2474, simple_loss=0.2994, pruned_loss=0.09773, over 954571.05 frames. ], batch size: 46, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:53:28,342 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 01:53:39,316 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.276e+02 1.876e+02 2.253e+02 2.579e+02 5.032e+02, threshold=4.505e+02, percent-clipped=1.0
2023-03-26 01:54:26,489 INFO [finetune.py:976] (6/7) Epoch 3, batch 3900, loss[loss=0.2145, simple_loss=0.2738, pruned_loss=0.07755, over 4826.00 frames. ], tot_loss[loss=0.2435, simple_loss=0.2954, pruned_loss=0.09577, over 954647.85 frames. ], batch size: 33, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:54:37,401 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0656, 1.8869, 1.7022, 2.1494, 1.3701, 4.5542, 1.8079, 2.4732], device='cuda:6'), covar=tensor([0.3084, 0.2369, 0.1984, 0.2073, 0.1771, 0.0096, 0.2430, 0.1271], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0111, 0.0116, 0.0119, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 01:54:42,302 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0530, 1.9193, 1.7892, 2.2950, 1.3147, 4.5791, 1.6533, 2.3760], device='cuda:6'), covar=tensor([0.3269, 0.2409, 0.1915, 0.1994, 0.1893, 0.0100, 0.2548, 0.1356], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0111, 0.0116, 0.0120, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 01:55:10,117 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15396.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:55:14,926 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0
2023-03-26 01:55:16,819 INFO [finetune.py:976] (6/7) Epoch 3, batch 3950, loss[loss=0.2038, simple_loss=0.2658, pruned_loss=0.0709, over 4851.00 frames. ], tot_loss[loss=0.2393, simple_loss=0.2913, pruned_loss=0.09363, over 953662.87 frames. ], batch size: 44, lr: 3.98e-03, grad_scale: 32.0
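The per-batch "batch size" swings widely above (17 to 65) because batches are packed to a fixed total duration rather than a fixed count, with the bucketing sampler grouping similar-length cuts so short utterances pack densely and long ones sparsely. A toy duration-capped packer illustrating the effect; it is not lhotse's DynamicBucketingSampler:

```python
# Sketch: pack cuts into batches capped by total duration (seconds).
def pack_by_duration(durations, max_duration=200.0):
    batch, total, batches = [], 0.0, []
    for d in sorted(durations):  # bucketing: similar lengths together
        if total + d > max_duration and batch:
            batches.append(batch)
            batch, total = [], 0.0
        batch.append(d)
        total += d
    if batch:
        batches.append(batch)
    return batches

sizes = [len(b) for b in pack_by_duration([3.0] * 40 + [11.0] * 40)]
print(sizes)  # e.g. [47, 18, 15]: short cuts pack many, long cuts few
```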
2023-03-26 01:55:20,968 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6636, 1.5732, 2.1342, 3.4135, 2.4355, 2.3750, 1.0171, 2.6108], device='cuda:6'), covar=tensor([0.1913, 0.1661, 0.1466, 0.0611, 0.0800, 0.1467, 0.2070, 0.0791], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0119, 0.0138, 0.0165, 0.0104, 0.0144, 0.0130, 0.0107], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 01:55:25,254 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.678e+02 2.164e+02 2.472e+02 4.231e+02, threshold=4.328e+02, percent-clipped=0.0
2023-03-26 01:55:49,720 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0561, 1.8601, 1.4292, 0.7650, 1.5952, 1.7524, 1.5276, 1.7923], device='cuda:6'), covar=tensor([0.0767, 0.0659, 0.1401, 0.1767, 0.1303, 0.1627, 0.1922, 0.0768], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0199, 0.0205, 0.0189, 0.0218, 0.0212, 0.0217, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:55:52,629 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15444.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 01:55:54,524 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1968, 2.0280, 1.5844, 2.2062, 2.0492, 1.8311, 1.8415, 3.0265], device='cuda:6'), covar=tensor([1.1720, 1.1865, 0.9326, 1.2151, 0.9653, 0.6338, 1.1364, 0.3375], device='cuda:6'), in_proj_covar=tensor([0.0273, 0.0249, 0.0218, 0.0284, 0.0235, 0.0196, 0.0239, 0.0185], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:55:59,828 INFO [finetune.py:976] (6/7) Epoch 3, batch 4000, loss[loss=0.2331, simple_loss=0.276, pruned_loss=0.09504, over 4808.00 frames. ], tot_loss[loss=0.2378, simple_loss=0.2895, pruned_loss=0.09304, over 954963.63 frames. ], batch size: 25, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:56:57,340 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7741, 3.8477, 3.6320, 1.8276, 3.9118, 2.9758, 1.1663, 2.7549], device='cuda:6'), covar=tensor([0.2588, 0.2157, 0.1666, 0.3600, 0.1058, 0.0931, 0.4336, 0.1631], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0167, 0.0164, 0.0128, 0.0155, 0.0121, 0.0146, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 01:57:05,457 INFO [finetune.py:976] (6/7) Epoch 3, batch 4050, loss[loss=0.2843, simple_loss=0.3353, pruned_loss=0.1166, over 4916.00 frames. ], tot_loss[loss=0.2424, simple_loss=0.2945, pruned_loss=0.09511, over 957599.91 frames. ], batch size: 35, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:57:15,658 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3654, 2.0908, 1.8153, 1.6100, 2.3569, 2.6554, 2.2889, 1.9843], device='cuda:6'), covar=tensor([0.0203, 0.0373, 0.0450, 0.0375, 0.0284, 0.0363, 0.0362, 0.0392], device='cuda:6'), in_proj_covar=tensor([0.0083, 0.0113, 0.0136, 0.0116, 0.0104, 0.0098, 0.0089, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.5143e-05, 8.9627e-05, 1.0949e-04, 9.1753e-05, 8.2531e-05, 7.2810e-05, 6.8905e-05, 8.4575e-05], device='cuda:6')
2023-03-26 01:57:20,269 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.142e+02 1.798e+02 2.110e+02 2.647e+02 5.396e+02, threshold=4.219e+02, percent-clipped=2.0
2023-03-26 01:57:26,212 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1837, 1.8277, 1.3759, 0.5905, 1.6165, 1.7456, 1.5896, 1.7946], device='cuda:6'), covar=tensor([0.0795, 0.0911, 0.1668, 0.2236, 0.1449, 0.2492, 0.2293, 0.0917], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0199, 0.0205, 0.0190, 0.0218, 0.0213, 0.0218, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:57:37,065 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7763, 1.6077, 2.0791, 1.5158, 1.9298, 1.9751, 1.6048, 2.1644], device='cuda:6'), covar=tensor([0.1122, 0.1783, 0.1219, 0.1624, 0.0684, 0.1249, 0.2171, 0.0721], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0205, 0.0204, 0.0196, 0.0181, 0.0225, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 01:57:58,329 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=15551.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 01:58:06,353 INFO [finetune.py:976] (6/7) Epoch 3, batch 4100, loss[loss=0.3392, simple_loss=0.3557, pruned_loss=0.1613, over 4885.00 frames. ], tot_loss[loss=0.2462, simple_loss=0.2989, pruned_loss=0.09678, over 957276.97 frames. ], batch size: 36, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:59:02,792 INFO [finetune.py:976] (6/7) Epoch 3, batch 4150, loss[loss=0.2562, simple_loss=0.3215, pruned_loss=0.09542, over 4920.00 frames. ], tot_loss[loss=0.2468, simple_loss=0.2998, pruned_loss=0.09695, over 957307.88 frames. ], batch size: 42, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 01:59:10,158 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9207, 1.2552, 1.5864, 1.6425, 1.4583, 1.5097, 1.5100, 1.6077], device='cuda:6'), covar=tensor([1.0124, 1.6823, 1.3129, 1.3764, 1.6123, 1.1088, 1.9546, 1.1643], device='cuda:6'), in_proj_covar=tensor([0.0226, 0.0253, 0.0252, 0.0265, 0.0242, 0.0216, 0.0277, 0.0218], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 01:59:10,583 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.791e+02 2.157e+02 2.467e+02 4.537e+02, threshold=4.313e+02, percent-clipped=1.0
2023-03-26 01:59:51,466 INFO [finetune.py:976] (6/7) Epoch 3, batch 4200, loss[loss=0.2695, simple_loss=0.3197, pruned_loss=0.1097, over 4760.00 frames. ], tot_loss[loss=0.2482, simple_loss=0.3012, pruned_loss=0.09755, over 953925.77 frames. ], batch size: 27, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 02:00:53,258 INFO [finetune.py:976] (6/7) Epoch 3, batch 4250, loss[loss=0.1951, simple_loss=0.2499, pruned_loss=0.07011, over 4726.00 frames. ], tot_loss[loss=0.2454, simple_loss=0.2981, pruned_loss=0.09637, over 955457.29 frames. ], batch size: 23, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 02:00:55,041 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0
2023-03-26 02:01:00,002 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.228e+02 1.731e+02 2.073e+02 2.469e+02 5.386e+02, threshold=4.147e+02, percent-clipped=2.0
2023-03-26 02:01:14,004 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0
2023-03-26 02:01:16,504 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.70 vs. limit=2.0
2023-03-26 02:01:31,793 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7803, 1.6556, 1.5908, 1.7609, 1.1279, 3.5347, 1.3669, 2.0522], device='cuda:6'), covar=tensor([0.3186, 0.2184, 0.1889, 0.2133, 0.1852, 0.0164, 0.2787, 0.1301], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0111, 0.0116, 0.0119, 0.0116, 0.0097, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 02:01:43,527 INFO [finetune.py:976] (6/7) Epoch 3, batch 4300, loss[loss=0.2232, simple_loss=0.2781, pruned_loss=0.08414, over 4898.00 frames. ], tot_loss[loss=0.2426, simple_loss=0.295, pruned_loss=0.09506, over 956734.27 frames. ], batch size: 32, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 02:02:43,904 INFO [finetune.py:976] (6/7) Epoch 3, batch 4350, loss[loss=0.1829, simple_loss=0.2431, pruned_loss=0.06134, over 4810.00 frames. ], tot_loss[loss=0.2376, simple_loss=0.2902, pruned_loss=0.09253, over 958227.60 frames. ], batch size: 25, lr: 3.98e-03, grad_scale: 32.0
2023-03-26 02:03:01,710 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.218e+02 1.712e+02 2.007e+02 2.460e+02 4.679e+02, threshold=4.015e+02, percent-clipped=1.0
2023-03-26 02:03:33,444 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=15851.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:03:36,421 INFO [finetune.py:976] (6/7) Epoch 3, batch 4400, loss[loss=0.2681, simple_loss=0.3288, pruned_loss=0.1037, over 4857.00 frames. ], tot_loss[loss=0.2395, simple_loss=0.2915, pruned_loss=0.09375, over 956977.22 frames. ], batch size: 31, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:03:39,552 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6441, 1.4659, 1.4403, 1.7711, 2.0439, 1.7148, 0.9849, 1.4414], device='cuda:6'), covar=tensor([0.2497, 0.2430, 0.2155, 0.1869, 0.1792, 0.1313, 0.3184, 0.2052], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0207, 0.0195, 0.0181, 0.0231, 0.0171, 0.0211, 0.0185], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:04:05,509 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=15883.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:04:26,296 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=15899.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:04:27,813 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-26 02:04:30,835 INFO [finetune.py:976] (6/7) Epoch 3, batch 4450, loss[loss=0.2602, simple_loss=0.3256, pruned_loss=0.09743, over 4817.00 frames. ], tot_loss[loss=0.2438, simple_loss=0.2959, pruned_loss=0.0958, over 957605.08 frames. ], batch size: 38, lr: 3.97e-03, grad_scale: 32.0
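The scaling.py:679 "Whitening" lines fire when a whiteness metric of some activation exceeds its limit; the metric is (to my reading of icefall's Whiten module, so treat details as assumptions) 1.0 when the per-group feature covariance is a multiple of the identity and grows toward the group size as the covariance concentrates in few directions, at which point a corrective penalty is applied. A self-contained sketch of such a metric:

    import torch

    # Hedged sketch: whiteness of features x of shape (num_frames,
    # num_channels), computed per channel group. Returns 1.0 for a
    # perfectly white (isotropic) covariance, larger otherwise.
    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        n, c = x.shape
        cg = c // num_groups
        xg = x.reshape(n, num_groups, cg).transpose(0, 1)   # (groups, n, cg)
        covar = torch.matmul(xg.transpose(1, 2), xg) / n    # (groups, cg, cg)
        mean_diag = covar.diagonal(dim1=1, dim2=2).mean()
        mean_sq = (covar ** 2).mean()
        # equals 1.0 when covar is a*I; up to cg when rank-1
        return mean_sq * cg / (mean_diag ** 2)

With num_groups=1 over 384 channels the logged limit is 5.0, and with 8 groups over 96 or 192 channels it is 2.0, matching the idea that tighter limits are feasible for smaller groups.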
2023-03-26 02:04:48,085 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.872e+02 2.224e+02 2.526e+02 5.583e+02, threshold=4.448e+02, percent-clipped=1.0
2023-03-26 02:05:16,002 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=15944.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:05:23,248 INFO [finetune.py:976] (6/7) Epoch 3, batch 4500, loss[loss=0.2524, simple_loss=0.3129, pruned_loss=0.0959, over 4823.00 frames. ], tot_loss[loss=0.2434, simple_loss=0.2963, pruned_loss=0.09522, over 957122.88 frames. ], batch size: 33, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:05:41,953 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.93 vs. limit=5.0
2023-03-26 02:06:21,358 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16001.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:06:24,310 INFO [finetune.py:976] (6/7) Epoch 3, batch 4550, loss[loss=0.2775, simple_loss=0.3259, pruned_loss=0.1146, over 4848.00 frames. ], tot_loss[loss=0.2447, simple_loss=0.2976, pruned_loss=0.09589, over 956684.92 frames. ], batch size: 44, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:06:36,869 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.237e+02 1.760e+02 2.085e+02 2.487e+02 3.865e+02, threshold=4.170e+02, percent-clipped=0.0
2023-03-26 02:07:05,377 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5917, 3.9973, 4.2361, 4.3846, 4.3164, 4.0359, 4.6964, 1.8810], device='cuda:6'), covar=tensor([0.0870, 0.0802, 0.0850, 0.0988, 0.1185, 0.1414, 0.0605, 0.4987], device='cuda:6'), in_proj_covar=tensor([0.0367, 0.0245, 0.0279, 0.0294, 0.0344, 0.0288, 0.0315, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:07:05,908 INFO [finetune.py:976] (6/7) Epoch 3, batch 4600, loss[loss=0.1763, simple_loss=0.2232, pruned_loss=0.06472, over 4125.00 frames. ], tot_loss[loss=0.2437, simple_loss=0.2969, pruned_loss=0.09526, over 956662.48 frames. ], batch size: 18, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:07:11,891 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16062.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:07:24,931 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7734, 0.9566, 1.5873, 1.4948, 1.4216, 1.4091, 1.3258, 1.4776], device='cuda:6'), covar=tensor([0.8270, 1.3824, 1.0231, 1.2594, 1.3295, 0.9547, 1.5368, 0.9829], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0255, 0.0254, 0.0266, 0.0244, 0.0218, 0.0278, 0.0220], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 02:07:35,212 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16085.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:07:52,132 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7292, 1.5285, 2.0817, 1.3360, 1.8713, 1.9728, 1.5776, 2.1714], device='cuda:6'), covar=tensor([0.1453, 0.2346, 0.1335, 0.2192, 0.0979, 0.1623, 0.2645, 0.0940], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0205, 0.0203, 0.0196, 0.0182, 0.0225, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:07:59,851 INFO [finetune.py:976] (6/7) Epoch 3, batch 4650, loss[loss=0.1906, simple_loss=0.2489, pruned_loss=0.06619, over 4759.00 frames. ], tot_loss[loss=0.2421, simple_loss=0.2947, pruned_loss=0.09479, over 956956.30 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:08:07,262 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.007e+02 1.817e+02 2.201e+02 2.598e+02 3.850e+02, threshold=4.403e+02, percent-clipped=0.0
2023-03-26 02:08:10,525 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.04 vs. limit=2.0
2023-03-26 02:08:11,032 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6913, 1.3656, 2.0790, 3.3075, 2.3159, 2.3423, 1.2144, 2.5907], device='cuda:6'), covar=tensor([0.1861, 0.1821, 0.1514, 0.0755, 0.0882, 0.1682, 0.1845, 0.0708], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0120, 0.0139, 0.0167, 0.0104, 0.0145, 0.0130, 0.0108], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 02:08:39,863 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16146.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:08:42,098 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16149.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:08:52,382 INFO [finetune.py:976] (6/7) Epoch 3, batch 4700, loss[loss=0.2468, simple_loss=0.2865, pruned_loss=0.1036, over 4725.00 frames. ], tot_loss[loss=0.24, simple_loss=0.2924, pruned_loss=0.09378, over 957806.86 frames. ], batch size: 54, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:09:40,787 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4772, 3.9381, 4.1587, 4.2384, 4.2701, 4.0356, 4.5382, 1.9062], device='cuda:6'), covar=tensor([0.0708, 0.0648, 0.0632, 0.0895, 0.1012, 0.1051, 0.0586, 0.4534], device='cuda:6'), in_proj_covar=tensor([0.0367, 0.0247, 0.0279, 0.0296, 0.0345, 0.0290, 0.0315, 0.0305], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:09:41,008 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0
2023-03-26 02:09:41,916 INFO [finetune.py:976] (6/7) Epoch 3, batch 4750, loss[loss=0.2348, simple_loss=0.2978, pruned_loss=0.0859, over 4924.00 frames. ], tot_loss[loss=0.2368, simple_loss=0.2892, pruned_loss=0.09216, over 957680.54 frames. ], batch size: 38, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:09:44,993 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16210.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:09:49,755 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.299e+02 1.759e+02 2.065e+02 2.416e+02 4.123e+02, threshold=4.129e+02, percent-clipped=0.0
2023-03-26 02:09:55,887 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.9283, 4.2930, 4.4531, 4.7595, 4.6121, 4.3102, 5.0524, 1.5377], device='cuda:6'), covar=tensor([0.0756, 0.0770, 0.0717, 0.0869, 0.1428, 0.1490, 0.0533, 0.5927], device='cuda:6'), in_proj_covar=tensor([0.0367, 0.0247, 0.0279, 0.0296, 0.0346, 0.0290, 0.0315, 0.0305], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:10:01,408 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3172, 2.1681, 1.8993, 1.0204, 2.0647, 1.8234, 1.5614, 2.0084], device='cuda:6'), covar=tensor([0.1118, 0.1007, 0.1952, 0.2458, 0.1841, 0.2528, 0.2486, 0.1310], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0199, 0.0204, 0.0191, 0.0217, 0.0211, 0.0218, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:10:03,623 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16239.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:10:16,138 INFO [finetune.py:976] (6/7) Epoch 3, batch 4800, loss[loss=0.2553, simple_loss=0.3115, pruned_loss=0.0996, over 4739.00 frames. ], tot_loss[loss=0.2396, simple_loss=0.2923, pruned_loss=0.09344, over 956917.05 frames. ], batch size: 54, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:11:10,319 INFO [finetune.py:976] (6/7) Epoch 3, batch 4850, loss[loss=0.1958, simple_loss=0.2573, pruned_loss=0.06708, over 4763.00 frames. ], tot_loss[loss=0.2431, simple_loss=0.2962, pruned_loss=0.09497, over 955826.95 frames. ], batch size: 26, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:11:18,714 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.278e+02 1.868e+02 2.289e+02 2.628e+02 4.977e+02, threshold=4.577e+02, percent-clipped=4.0
2023-03-26 02:11:53,000 INFO [finetune.py:976] (6/7) Epoch 3, batch 4900, loss[loss=0.273, simple_loss=0.3373, pruned_loss=0.1043, over 4792.00 frames. ], tot_loss[loss=0.2453, simple_loss=0.2984, pruned_loss=0.0961, over 955838.12 frames. ], batch size: 51, lr: 3.97e-03, grad_scale: 32.0
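The grad_scale field is the fp16 dynamic loss scale; its movement above (32.0 at batch 4700 briefly reading 64.0, then back to 32.0 by batch 4750) matches the usual automatic-mixed-precision dynamics, where the scale doubles after a stretch of overflow-free steps and halves when a step produces inf/nan gradients. A generic PyTorch AMP sketch of the pattern (icefall wraps this machinery inside its own training loop, so take the exact call sites as illustrative):

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=32.0, growth_factor=2.0, backoff_factor=0.5,
        growth_interval=2000)

    def train_step(model, optimizer, batch, loss_fn):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = loss_fn(model, batch)
        scaler.scale(loss).backward()
        scaler.step(optimizer)   # skipped internally if grads overflowed
        scaler.update()          # grows or backs off the scale
        return loss.detach()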
2023-03-26 02:11:53,740 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16357.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:12:04,219 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16371.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:12:35,109 INFO [finetune.py:976] (6/7) Epoch 3, batch 4950, loss[loss=0.2575, simple_loss=0.2985, pruned_loss=0.1083, over 4777.00 frames. ], tot_loss[loss=0.248, simple_loss=0.3009, pruned_loss=0.09757, over 954538.72 frames. ], batch size: 25, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:12:39,241 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1558, 1.9275, 1.6305, 2.0472, 2.1541, 1.7906, 2.4511, 2.0968], device='cuda:6'), covar=tensor([0.2036, 0.4006, 0.4750, 0.4385, 0.3211, 0.2218, 0.4506, 0.2894], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0195, 0.0237, 0.0254, 0.0221, 0.0186, 0.0209, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:12:43,740 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.793e+02 2.170e+02 2.564e+02 4.726e+02, threshold=4.340e+02, percent-clipped=1.0
2023-03-26 02:12:53,380 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16432.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:12:59,269 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16441.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:13:11,535 INFO [finetune.py:976] (6/7) Epoch 3, batch 5000, loss[loss=0.2372, simple_loss=0.2874, pruned_loss=0.09354, over 4878.00 frames. ], tot_loss[loss=0.2452, simple_loss=0.2979, pruned_loss=0.09622, over 953838.45 frames. ], batch size: 32, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:13:20,176 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3006, 1.9432, 2.2407, 0.9975, 2.4486, 2.6661, 2.1941, 2.1418], device='cuda:6'), covar=tensor([0.1038, 0.0735, 0.0401, 0.0921, 0.0409, 0.0472, 0.0464, 0.0642], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0158, 0.0118, 0.0137, 0.0132, 0.0120, 0.0146, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.8407e-05, 1.1773e-04, 8.5713e-05, 1.0065e-04, 9.5649e-05, 8.8870e-05, 1.0922e-04, 1.0695e-04], device='cuda:6')
2023-03-26 02:14:07,976 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-26 02:14:08,365 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16505.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:14:08,895 INFO [finetune.py:976] (6/7) Epoch 3, batch 5050, loss[loss=0.1757, simple_loss=0.2533, pruned_loss=0.04903, over 4759.00 frames. ], tot_loss[loss=0.2418, simple_loss=0.2944, pruned_loss=0.09462, over 954664.20 frames. ], batch size: 28, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:14:27,728 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.680e+02 2.024e+02 2.446e+02 4.498e+02, threshold=4.048e+02, percent-clipped=1.0
2023-03-26 02:14:49,256 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16539.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:15:09,826 INFO [finetune.py:976] (6/7) Epoch 3, batch 5100, loss[loss=0.2356, simple_loss=0.2751, pruned_loss=0.09798, over 4036.00 frames. ], tot_loss[loss=0.2383, simple_loss=0.2906, pruned_loss=0.09302, over 955849.52 frames. ], batch size: 17, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:15:22,449 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9179, 1.8421, 1.6585, 2.0251, 1.9747, 1.6839, 2.3206, 1.9207], device='cuda:6'), covar=tensor([0.1650, 0.3448, 0.3408, 0.3245, 0.2545, 0.1823, 0.3122, 0.2269], device='cuda:6'), in_proj_covar=tensor([0.0163, 0.0194, 0.0235, 0.0253, 0.0219, 0.0184, 0.0208, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:15:25,868 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9072, 1.5747, 2.3883, 1.6244, 2.1152, 2.2271, 1.6468, 2.4062], device='cuda:6'), covar=tensor([0.1630, 0.2433, 0.1721, 0.2075, 0.1153, 0.1492, 0.2781, 0.1073], device='cuda:6'), in_proj_covar=tensor([0.0207, 0.0207, 0.0205, 0.0197, 0.0184, 0.0227, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:15:36,572 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16587.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:15:52,950 INFO [finetune.py:976] (6/7) Epoch 3, batch 5150, loss[loss=0.2515, simple_loss=0.3011, pruned_loss=0.1009, over 4746.00 frames. ], tot_loss[loss=0.2366, simple_loss=0.289, pruned_loss=0.09211, over 955990.77 frames. ], batch size: 54, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:16:12,056 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.762e+02 2.113e+02 2.590e+02 4.768e+02, threshold=4.226e+02, percent-clipped=2.0
2023-03-26 02:16:47,968 INFO [finetune.py:976] (6/7) Epoch 3, batch 5200, loss[loss=0.1897, simple_loss=0.2448, pruned_loss=0.06731, over 4781.00 frames. ], tot_loss[loss=0.2403, simple_loss=0.2929, pruned_loss=0.09391, over 953129.63 frames. ], batch size: 26, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:16:49,160 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16657.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:17:28,408 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0
2023-03-26 02:17:40,754 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16705.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:17:41,306 INFO [finetune.py:976] (6/7) Epoch 3, batch 5250, loss[loss=0.2232, simple_loss=0.2909, pruned_loss=0.07776, over 4859.00 frames. ], tot_loss[loss=0.2419, simple_loss=0.2951, pruned_loss=0.09437, over 955509.20 frames. ], batch size: 44, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:17:53,305 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.211e+02 1.833e+02 2.103e+02 2.647e+02 4.683e+02, threshold=4.205e+02, percent-clipped=1.0
2023-03-26 02:18:00,562 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=16727.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:18:12,278 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16741.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:18:21,890 INFO [finetune.py:976] (6/7) Epoch 3, batch 5300, loss[loss=0.2343, simple_loss=0.2856, pruned_loss=0.09144, over 4887.00 frames. ], tot_loss[loss=0.2428, simple_loss=0.2966, pruned_loss=0.09452, over 957037.13 frames. ], batch size: 43, lr: 3.97e-03, grad_scale: 32.0
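The zipformer.py:2441 attn_weights_entropy dumps are a per-head diagnostic: the entropy of each attention head's distribution (low values mean peaked attention, high values diffuse), printed alongside covariance statistics of the attention projections. A sketch of the entropy computation alone, assuming attn_weights of shape (num_heads, tgt_len, src_len) that sums to 1 over src_len (the shape and averaging are assumptions; the logged tensors have eight entries, one per head):

    import torch

    def attention_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        eps = 1.0e-20
        # entropy of each target position's distribution over sources
        h = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return h.mean(dim=-1)  # one averaged value per head

    # A uniform distribution over 8 source positions gives log(8) ~= 2.08,
    # which is the scale of the values seen in the dumps above.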
2023-03-26 02:18:22,038 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8261, 1.7240, 1.3459, 1.7363, 1.6436, 1.5576, 1.5703, 2.4225], device='cuda:6'), covar=tensor([1.0672, 1.1003, 0.8397, 1.0533, 0.9085, 0.5641, 1.0274, 0.3372], device='cuda:6'), in_proj_covar=tensor([0.0273, 0.0249, 0.0218, 0.0283, 0.0234, 0.0195, 0.0238, 0.0186], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 02:18:23,859 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8303, 1.1706, 0.9089, 1.7347, 2.1779, 1.2818, 1.4808, 1.6766], device='cuda:6'), covar=tensor([0.1630, 0.2373, 0.2359, 0.1336, 0.2033, 0.2263, 0.1586, 0.2155], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0099, 0.0118, 0.0094, 0.0125, 0.0098, 0.0101, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 02:18:47,004 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9725, 1.5813, 1.7287, 1.7637, 1.5756, 1.5937, 1.6754, 1.7068], device='cuda:6'), covar=tensor([1.0544, 1.4704, 1.1277, 1.4515, 1.4870, 1.0396, 1.8418, 1.0469], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0253, 0.0254, 0.0265, 0.0243, 0.0218, 0.0278, 0.0220], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 02:18:49,197 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16789.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:19:07,708 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=16805.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:19:08,227 INFO [finetune.py:976] (6/7) Epoch 3, batch 5350, loss[loss=0.2089, simple_loss=0.2485, pruned_loss=0.08461, over 4311.00 frames. ], tot_loss[loss=0.2424, simple_loss=0.2965, pruned_loss=0.09413, over 955249.10 frames. ], batch size: 18, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:19:21,972 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.184e+02 1.880e+02 2.227e+02 2.511e+02 3.677e+02, threshold=4.454e+02, percent-clipped=0.0
2023-03-26 02:19:45,880 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.56 vs. limit=5.0
2023-03-26 02:19:46,298 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=16853.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:19:48,085 INFO [finetune.py:976] (6/7) Epoch 3, batch 5400, loss[loss=0.2431, simple_loss=0.2836, pruned_loss=0.1013, over 4904.00 frames. ], tot_loss[loss=0.2397, simple_loss=0.2934, pruned_loss=0.09298, over 954224.63 frames. ], batch size: 32, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:19:56,052 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16868.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 02:20:10,900 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-26 02:20:31,377 INFO [finetune.py:976] (6/7) Epoch 3, batch 5450, loss[loss=0.2066, simple_loss=0.2616, pruned_loss=0.0758, over 4917.00 frames. ], tot_loss[loss=0.2371, simple_loss=0.2904, pruned_loss=0.0919, over 955925.18 frames. ], batch size: 46, lr: 3.97e-03, grad_scale: 32.0
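The zipformer.py:1188 lines track stochastic layer dropout: each encoder stack has a warmup window measured in batch_count, and for each batch a set of whole layers may be skipped (layers_to_drop). Here training is long past warmup, so num_to_drop is almost always 0 with occasional single-layer drops. A purely illustrative sketch of the mechanic; the probabilities and schedule below are assumptions, not the values in zipformer.py:

    import random

    # Hedged sketch: higher layer-drop probability during warmup, a small
    # residual probability afterwards (hence the rare num_to_drop=1 above).
    def pick_layers_to_drop(batch_count, warmup_begin, warmup_end,
                            num_layers, warmup_prob=0.075, final_prob=0.025):
        if batch_count >= warmup_end:
            p = final_prob
        elif batch_count <= warmup_begin:
            p = warmup_prob
        else:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = warmup_prob + frac * (final_prob - warmup_prob)
        return {i for i in range(num_layers) if random.random() < p}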
2023-03-26 02:20:31,472 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=16906.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:20:32,655 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6242, 1.5760, 1.5340, 1.6725, 1.1865, 2.8330, 1.2450, 1.7318], device='cuda:6'), covar=tensor([0.3172, 0.2203, 0.1909, 0.2130, 0.1860, 0.0295, 0.2257, 0.1202], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0111, 0.0116, 0.0119, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 02:20:38,654 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.665e+02 2.000e+02 2.450e+02 4.433e+02, threshold=4.000e+02, percent-clipped=0.0
2023-03-26 02:20:46,035 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16929.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:21:14,237 INFO [finetune.py:976] (6/7) Epoch 3, batch 5500, loss[loss=0.2285, simple_loss=0.2694, pruned_loss=0.0938, over 4771.00 frames. ], tot_loss[loss=0.2345, simple_loss=0.2872, pruned_loss=0.09095, over 953162.50 frames. ], batch size: 26, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:21:24,959 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8842, 1.1670, 1.0313, 1.6717, 2.1437, 1.5053, 1.4521, 1.8760], device='cuda:6'), covar=tensor([0.1515, 0.2166, 0.2172, 0.1200, 0.1884, 0.2020, 0.1379, 0.1857], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0099, 0.0118, 0.0094, 0.0125, 0.0098, 0.0101, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 02:21:26,190 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=16967.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 02:21:43,871 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8122, 1.6932, 1.4729, 1.7448, 1.8801, 1.5229, 2.2215, 1.8474], device='cuda:6'), covar=tensor([0.2105, 0.3939, 0.4668, 0.3968, 0.3210, 0.2321, 0.3726, 0.2816], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0195, 0.0239, 0.0254, 0.0221, 0.0186, 0.0209, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:22:07,779 INFO [finetune.py:976] (6/7) Epoch 3, batch 5550, loss[loss=0.2492, simple_loss=0.2977, pruned_loss=0.1004, over 4906.00 frames. ], tot_loss[loss=0.2375, simple_loss=0.29, pruned_loss=0.09251, over 954903.28 frames. ], batch size: 37, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:22:15,688 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.712e+02 2.015e+02 2.380e+02 4.122e+02, threshold=4.030e+02, percent-clipped=1.0
2023-03-26 02:22:19,497 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5284, 1.3399, 1.6069, 0.8661, 1.5697, 1.9240, 1.6478, 1.5565], device='cuda:6'), covar=tensor([0.1238, 0.1093, 0.0628, 0.0822, 0.0562, 0.0500, 0.0611, 0.0736], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.7873e-05, 1.1700e-04, 8.5226e-05, 1.0042e-04, 9.5765e-05, 8.9761e-05, 1.0885e-04, 1.0695e-04], device='cuda:6')
2023-03-26 02:22:21,791 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17027.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:22:41,485 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4061, 1.3088, 1.0982, 1.1713, 1.6127, 1.5480, 1.4031, 1.1333], device='cuda:6'), covar=tensor([0.0279, 0.0360, 0.0684, 0.0361, 0.0229, 0.0444, 0.0329, 0.0487], device='cuda:6'), in_proj_covar=tensor([0.0085, 0.0115, 0.0139, 0.0119, 0.0106, 0.0101, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([6.6685e-05, 9.1046e-05, 1.1223e-04, 9.4358e-05, 8.3856e-05, 7.5011e-05, 7.1198e-05, 8.6461e-05], device='cuda:6')
2023-03-26 02:22:53,359 INFO [finetune.py:976] (6/7) Epoch 3, batch 5600, loss[loss=0.2407, simple_loss=0.2925, pruned_loss=0.09445, over 4879.00 frames. ], tot_loss[loss=0.2433, simple_loss=0.2964, pruned_loss=0.0951, over 955654.11 frames. ], batch size: 32, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:23:10,686 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17075.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:23:11,412 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.84 vs. limit=5.0
2023-03-26 02:23:26,465 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-26 02:23:38,819 INFO [finetune.py:976] (6/7) Epoch 3, batch 5650, loss[loss=0.2781, simple_loss=0.3216, pruned_loss=0.1173, over 4901.00 frames. ], tot_loss[loss=0.2446, simple_loss=0.2985, pruned_loss=0.09536, over 955253.62 frames. ], batch size: 35, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:23:45,804 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.755e+02 2.152e+02 2.681e+02 4.789e+02, threshold=4.305e+02, percent-clipped=1.0
2023-03-26 02:24:15,374 INFO [finetune.py:976] (6/7) Epoch 3, batch 5700, loss[loss=0.2201, simple_loss=0.2572, pruned_loss=0.09153, over 4238.00 frames. ], tot_loss[loss=0.2414, simple_loss=0.2938, pruned_loss=0.0945, over 936612.85 frames. ], batch size: 18, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:24:56,868 INFO [finetune.py:976] (6/7) Epoch 4, batch 0, loss[loss=0.2772, simple_loss=0.3265, pruned_loss=0.114, over 4912.00 frames. ], tot_loss[loss=0.2772, simple_loss=0.3265, pruned_loss=0.114, over 4912.00 frames. ], batch size: 33, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:24:56,868 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 02:25:09,846 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2259, 1.3461, 1.4339, 0.8302, 1.2828, 1.5848, 1.6198, 1.3711], device='cuda:6'), covar=tensor([0.1129, 0.0676, 0.0526, 0.0606, 0.0497, 0.0528, 0.0433, 0.0693], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0157, 0.0117, 0.0136, 0.0132, 0.0121, 0.0146, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.8181e-05, 1.1719e-04, 8.5409e-05, 1.0017e-04, 9.5355e-05, 9.0035e-05, 1.0877e-04, 1.0736e-04], device='cuda:6')
2023-03-26 02:25:18,214 INFO [finetune.py:1010] (6/7) Epoch 4, validation: loss=0.1768, simple_loss=0.2473, pruned_loss=0.0532, over 2265189.00 frames.
2023-03-26 02:25:18,215 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB
2023-03-26 02:25:22,738 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17189.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:25:55,806 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.713e+02 2.128e+02 2.708e+02 4.853e+02, threshold=4.257e+02, percent-clipped=3.0
2023-03-26 02:26:00,041 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17224.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:26:05,891 INFO [finetune.py:976] (6/7) Epoch 4, batch 50, loss[loss=0.2631, simple_loss=0.3128, pruned_loss=0.1067, over 4903.00 frames. ], tot_loss[loss=0.2465, simple_loss=0.2989, pruned_loss=0.09708, over 215350.17 frames. ], batch size: 37, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:26:29,245 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17250.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:26:36,944 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17262.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 02:26:55,370 INFO [finetune.py:976] (6/7) Epoch 4, batch 100, loss[loss=0.2416, simple_loss=0.2778, pruned_loss=0.1027, over 4939.00 frames. ], tot_loss[loss=0.2374, simple_loss=0.2904, pruned_loss=0.09221, over 381668.05 frames. ], batch size: 38, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:27:06,683 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3189, 1.4359, 1.4845, 0.8999, 1.3583, 1.6500, 1.6367, 1.4134], device='cuda:6'), covar=tensor([0.0903, 0.0513, 0.0398, 0.0493, 0.0367, 0.0440, 0.0278, 0.0492], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.7993e-05, 1.1700e-04, 8.5018e-05, 1.0026e-04, 9.5720e-05, 8.9530e-05, 1.0888e-04, 1.0741e-04], device='cuda:6')
2023-03-26 02:27:26,863 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.228e+02 1.697e+02 1.982e+02 2.273e+02 3.827e+02, threshold=3.964e+02, percent-clipped=0.0
2023-03-26 02:27:36,644 INFO [finetune.py:976] (6/7) Epoch 4, batch 150, loss[loss=0.2522, simple_loss=0.3017, pruned_loss=0.1013, over 4907.00 frames. ], tot_loss[loss=0.233, simple_loss=0.2852, pruned_loss=0.09035, over 510022.42 frames. ], batch size: 35, lr: 3.97e-03, grad_scale: 32.0
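At each epoch boundary (finetune.py:1001/1010 above), the run sweeps the whole dev set once and logs the aggregated triple over all of its ~2.27M frames, with the model in eval mode and gradients disabled; note also that tot_loss resets at batch 0 and its frame count grows back toward the steady ~950k window over the first few hundred batches. An illustrative shape for that validation pass (the loss_fn contract here is an assumption):

    import torch

    def compute_validation_loss(model, dev_loader, loss_fn):
        model.eval()
        tot = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
        frames = 0
        with torch.no_grad():
            for batch in dev_loader:
                losses, num_frames = loss_fn(model, batch)
                for k in tot:
                    tot[k] += losses[k].item()
                frames += num_frames
        model.train()
        # per-frame averages, as printed in the "validation:" line
        return {k: v / frames for k, v in tot.items()}, frames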
2023-03-26 02:27:45,973 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3498, 1.3805, 1.5229, 0.9941, 1.4192, 1.6606, 1.6993, 1.3922], device='cuda:6'), covar=tensor([0.0910, 0.0640, 0.0529, 0.0485, 0.0457, 0.0473, 0.0313, 0.0613], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0158, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.8255e-05, 1.1723e-04, 8.5224e-05, 1.0054e-04, 9.5736e-05, 8.9709e-05, 1.0911e-04, 1.0776e-04], device='cuda:6')
2023-03-26 02:28:01,949 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17362.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:28:11,272 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7263, 0.8876, 1.6064, 1.4939, 1.3874, 1.4086, 1.3368, 1.4683], device='cuda:6'), covar=tensor([0.7028, 1.1531, 0.8612, 0.9726, 1.0897, 0.8118, 1.1661, 0.7907], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0255, 0.0257, 0.0267, 0.0245, 0.0220, 0.0280, 0.0222], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:28:22,987 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2066, 1.9192, 2.2763, 1.0823, 2.3944, 2.4190, 2.0403, 2.1453], device='cuda:6'), covar=tensor([0.1500, 0.1302, 0.0583, 0.0950, 0.0829, 0.1273, 0.0747, 0.1094], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0158, 0.0117, 0.0137, 0.0132, 0.0121, 0.0146, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.8306e-05, 1.1727e-04, 8.5066e-05, 1.0044e-04, 9.5887e-05, 8.9706e-05, 1.0908e-04, 1.0778e-04], device='cuda:6')
2023-03-26 02:28:23,555 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1221, 3.5565, 3.7221, 3.9779, 3.8501, 3.6328, 4.2096, 1.3851], device='cuda:6'), covar=tensor([0.0853, 0.0867, 0.0780, 0.0987, 0.1412, 0.1421, 0.0798, 0.5115], device='cuda:6'), in_proj_covar=tensor([0.0360, 0.0243, 0.0275, 0.0291, 0.0340, 0.0284, 0.0308, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:28:25,902 INFO [finetune.py:976] (6/7) Epoch 4, batch 200, loss[loss=0.1692, simple_loss=0.2371, pruned_loss=0.05064, over 4761.00 frames. ], tot_loss[loss=0.2309, simple_loss=0.2834, pruned_loss=0.08924, over 608576.44 frames. ], batch size: 26, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:28:26,021 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17383.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:28:55,776 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.314e+02 1.771e+02 2.098e+02 2.514e+02 4.657e+02, threshold=4.195e+02, percent-clipped=1.0
2023-03-26 02:29:01,227 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17423.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:29:09,066 INFO [finetune.py:976] (6/7) Epoch 4, batch 250, loss[loss=0.2572, simple_loss=0.3178, pruned_loss=0.09836, over 4891.00 frames. ], tot_loss[loss=0.2344, simple_loss=0.2878, pruned_loss=0.09049, over 685262.78 frames. ], batch size: 35, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:29:17,859 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17444.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 02:29:47,693 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.28 vs. limit=5.0
2023-03-26 02:29:49,002 INFO [finetune.py:976] (6/7) Epoch 4, batch 300, loss[loss=0.2947, simple_loss=0.3411, pruned_loss=0.1241, over 4810.00 frames. ], tot_loss[loss=0.238, simple_loss=0.2914, pruned_loss=0.09235, over 744911.59 frames. ], batch size: 38, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:29:55,492 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8269, 1.6868, 1.4450, 1.7256, 1.8805, 1.5626, 2.2394, 1.8141], device='cuda:6'), covar=tensor([0.1980, 0.3789, 0.4354, 0.3866, 0.3161, 0.2162, 0.3944, 0.2675], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0195, 0.0238, 0.0254, 0.0222, 0.0186, 0.0209, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:30:24,409 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0
2023-03-26 02:30:35,038 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.302e+02 1.964e+02 2.271e+02 2.699e+02 6.272e+02, threshold=4.542e+02, percent-clipped=2.0
2023-03-26 02:30:39,316 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17524.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:30:44,604 INFO [finetune.py:976] (6/7) Epoch 4, batch 350, loss[loss=0.2406, simple_loss=0.2968, pruned_loss=0.09215, over 4730.00 frames. ], tot_loss[loss=0.2396, simple_loss=0.2935, pruned_loss=0.09278, over 791989.74 frames. ], batch size: 59, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:30:53,138 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17545.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:31:10,233 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17562.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:31:16,311 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17572.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 02:31:23,349 INFO [finetune.py:976] (6/7) Epoch 4, batch 400, loss[loss=0.2067, simple_loss=0.2794, pruned_loss=0.06701, over 4737.00 frames. ], tot_loss[loss=0.241, simple_loss=0.2952, pruned_loss=0.0934, over 828548.74 frames. ], batch size: 54, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:31:46,315 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17610.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:31:51,080 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.790e+02 1.987e+02 2.567e+02 5.687e+02, threshold=3.975e+02, percent-clipped=1.0
2023-03-26 02:32:10,369 INFO [finetune.py:976] (6/7) Epoch 4, batch 450, loss[loss=0.208, simple_loss=0.2675, pruned_loss=0.07422, over 4825.00 frames. ], tot_loss[loss=0.2391, simple_loss=0.2931, pruned_loss=0.09258, over 856901.17 frames. ], batch size: 25, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:33:00,229 INFO [finetune.py:976] (6/7) Epoch 4, batch 500, loss[loss=0.2194, simple_loss=0.2759, pruned_loss=0.08145, over 4765.00 frames. ], tot_loss[loss=0.2367, simple_loss=0.2904, pruned_loss=0.09151, over 878664.49 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 32.0
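The lr column drifts only slowly here (3.98e-03 earlier in epoch 3, 3.97e-03 through these batches), consistent with icefall's Eden schedule as I recall it, which decays a base lr smoothly in both the batch and epoch counters. Treat the exact form and the base_lr=4e-3 below as assumptions; they do reproduce the logged values:

    # Hedged sketch of an Eden-style schedule:
    #   lr = base_lr * ((b^2 + B^2)/B^2)^-0.25 * ((e^2 + E^2)/E^2)^-0.25
    # with B = lr_batches, E = lr_epochs.
    def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
        bf = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        ef = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * bf * ef

    # eden_lr(4e-3, 15117, 3) ~= 3.98e-03; eden_lr(4e-3, 18000, 4) ~= 3.97e-03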
2023-03-26 02:33:11,497 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17700.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:33:20,223 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1492, 1.3299, 1.0350, 1.3324, 1.4793, 2.4389, 1.2259, 1.4475], device='cuda:6'), covar=tensor([0.1012, 0.1791, 0.1219, 0.1005, 0.1633, 0.0361, 0.1557, 0.1748], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0080, 0.0093, 0.0083, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 02:33:24,346 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.264e+02 1.808e+02 2.091e+02 2.485e+02 4.480e+02, threshold=4.181e+02, percent-clipped=1.0
2023-03-26 02:33:24,431 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17718.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:33:31,283 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9170, 1.7024, 2.1466, 1.5277, 1.9864, 2.1246, 1.6744, 2.4529], device='cuda:6'), covar=tensor([0.1477, 0.2064, 0.1451, 0.1947, 0.0966, 0.1517, 0.2477, 0.0798], device='cuda:6'), in_proj_covar=tensor([0.0207, 0.0207, 0.0205, 0.0198, 0.0183, 0.0226, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:33:32,333 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17730.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:33:34,060 INFO [finetune.py:976] (6/7) Epoch 4, batch 550, loss[loss=0.2284, simple_loss=0.2765, pruned_loss=0.09017, over 4909.00 frames. ], tot_loss[loss=0.2344, simple_loss=0.2879, pruned_loss=0.09046, over 896855.48 frames. ], batch size: 36, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:33:37,768 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=17739.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 02:34:03,325 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7121, 1.5470, 2.0571, 1.3419, 1.7077, 1.9094, 1.4968, 2.1861], device='cuda:6'), covar=tensor([0.1580, 0.2102, 0.1557, 0.2069, 0.0985, 0.1679, 0.2697, 0.0840], device='cuda:6'), in_proj_covar=tensor([0.0208, 0.0208, 0.0205, 0.0198, 0.0184, 0.0227, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:34:03,947 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17761.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:34:17,631 INFO [finetune.py:976] (6/7) Epoch 4, batch 600, loss[loss=0.2615, simple_loss=0.2918, pruned_loss=0.1156, over 3967.00 frames. ], tot_loss[loss=0.2366, simple_loss=0.2901, pruned_loss=0.09152, over 910484.10 frames. ], batch size: 17, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:34:22,576 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=17791.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:34:25,174 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.81 vs. limit=5.0
2023-03-26 02:34:41,399 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.729e+02 2.101e+02 2.480e+02 7.519e+02, threshold=4.202e+02, percent-clipped=3.0
2023-03-26 02:34:50,417 INFO [finetune.py:976] (6/7) Epoch 4, batch 650, loss[loss=0.2293, simple_loss=0.2808, pruned_loss=0.08891, over 4757.00 frames. ], tot_loss[loss=0.2394, simple_loss=0.2932, pruned_loss=0.09278, over 922570.00 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:34:59,977 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=17845.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:35:27,786 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5558, 1.4051, 1.2074, 1.0831, 1.3423, 1.3247, 1.2411, 1.9742], device='cuda:6'), covar=tensor([0.8337, 0.7908, 0.6554, 0.7888, 0.6770, 0.4620, 0.7950, 0.2821], device='cuda:6'), in_proj_covar=tensor([0.0274, 0.0249, 0.0218, 0.0283, 0.0234, 0.0195, 0.0239, 0.0187], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 02:35:31,912 INFO [finetune.py:976] (6/7) Epoch 4, batch 700, loss[loss=0.2397, simple_loss=0.2891, pruned_loss=0.09508, over 4803.00 frames. ], tot_loss[loss=0.24, simple_loss=0.2943, pruned_loss=0.09283, over 930860.82 frames. ], batch size: 51, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:35:46,368 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=17893.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:36:18,034 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.240e+02 1.804e+02 2.027e+02 2.461e+02 4.855e+02, threshold=4.055e+02, percent-clipped=2.0
2023-03-26 02:36:26,882 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6747, 3.6214, 3.4128, 1.7208, 3.7305, 2.8029, 0.7642, 2.4767], device='cuda:6'), covar=tensor([0.2312, 0.1797, 0.1440, 0.3191, 0.1014, 0.1023, 0.4521, 0.1427], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0169, 0.0163, 0.0129, 0.0154, 0.0120, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 02:36:34,750 INFO [finetune.py:976] (6/7) Epoch 4, batch 750, loss[loss=0.2744, simple_loss=0.3221, pruned_loss=0.1133, over 4807.00 frames. ], tot_loss[loss=0.2411, simple_loss=0.2957, pruned_loss=0.09325, over 937503.61 frames. ], batch size: 45, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:36:59,906 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0
2023-03-26 02:37:08,555 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=17960.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:37:30,390 INFO [finetune.py:976] (6/7) Epoch 4, batch 800, loss[loss=0.2336, simple_loss=0.2885, pruned_loss=0.08937, over 4836.00 frames. ], tot_loss[loss=0.239, simple_loss=0.2939, pruned_loss=0.09204, over 939825.85 frames. ], batch size: 49, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:38:05,563 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.784e+02 2.191e+02 2.808e+02 5.190e+02, threshold=4.382e+02, percent-clipped=3.0
2023-03-26 02:38:05,663 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18018.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:08,537 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18021.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:15,858 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. limit=5.0
2023-03-26 02:38:20,835 INFO [finetune.py:976] (6/7) Epoch 4, batch 850, loss[loss=0.2149, simple_loss=0.2673, pruned_loss=0.08129, over 4873.00 frames. ], tot_loss[loss=0.2368, simple_loss=0.2909, pruned_loss=0.09138, over 943651.34 frames. ], batch size: 34, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:38:26,591 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18039.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 02:38:39,174 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18056.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:45,708 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18066.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:38:57,813 INFO [finetune.py:976] (6/7) Epoch 4, batch 900, loss[loss=0.2271, simple_loss=0.2718, pruned_loss=0.0912, over 4830.00 frames. ], tot_loss[loss=0.2332, simple_loss=0.2868, pruned_loss=0.08977, over 944700.59 frames. ], batch size: 33, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:38:59,685 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18086.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:39:00,271 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18087.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 02:39:19,669 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.218e+02 1.720e+02 1.921e+02 2.312e+02 4.297e+02, threshold=3.842e+02, percent-clipped=0.0
2023-03-26 02:39:35,995 INFO [finetune.py:976] (6/7) Epoch 4, batch 950, loss[loss=0.2313, simple_loss=0.2972, pruned_loss=0.08268, over 4892.00 frames. ], tot_loss[loss=0.2331, simple_loss=0.2861, pruned_loss=0.09003, over 948763.33 frames. ], batch size: 43, lr: 3.97e-03, grad_scale: 32.0
2023-03-26 02:39:46,542 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
2023-03-26 02:40:16,972 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0024, 1.1581, 1.7115, 1.6544, 1.5733, 1.5065, 1.4736, 1.6017], device='cuda:6'), covar=tensor([0.6639, 1.1307, 0.9268, 0.9494, 1.0908, 0.7994, 1.2789, 0.8088], device='cuda:6'), in_proj_covar=tensor([0.0227, 0.0252, 0.0254, 0.0263, 0.0241, 0.0216, 0.0277, 0.0219], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 02:40:22,967 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18174.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:40:24,018 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18175.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:40:29,236 INFO [finetune.py:976] (6/7) Epoch 4, batch 1000, loss[loss=0.239, simple_loss=0.2913, pruned_loss=0.09336, over 4821.00 frames. ], tot_loss[loss=0.2354, simple_loss=0.2889, pruned_loss=0.091, over 950148.33 frames. ], batch size: 25, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:40:33,063 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4378, 1.3960, 1.1925, 1.2894, 1.6747, 1.6783, 1.4634, 1.1739], device='cuda:6'), covar=tensor([0.0251, 0.0313, 0.0692, 0.0325, 0.0218, 0.0359, 0.0283, 0.0396], device='cuda:6'), in_proj_covar=tensor([0.0084, 0.0113, 0.0137, 0.0116, 0.0104, 0.0098, 0.0091, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.5267e-05, 8.9447e-05, 1.0999e-04, 9.1831e-05, 8.2091e-05, 7.3266e-05, 6.9953e-05, 8.4778e-05], device='cuda:6')
2023-03-26 02:40:41,671 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4025, 2.2284, 1.8715, 1.0548, 2.0841, 1.8208, 1.5682, 2.0544], device='cuda:6'), covar=tensor([0.0945, 0.0905, 0.1852, 0.2312, 0.1587, 0.2641, 0.2385, 0.1183], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0199, 0.0203, 0.0190, 0.0217, 0.0210, 0.0218, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 02:41:07,306 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.690e+02 2.099e+02 2.471e+02 3.966e+02, threshold=4.198e+02, percent-clipped=1.0
2023-03-26 02:41:28,636 INFO [finetune.py:976] (6/7) Epoch 4, batch 1050, loss[loss=0.162, simple_loss=0.2344, pruned_loss=0.04479, over 4761.00 frames. ], tot_loss[loss=0.2357, simple_loss=0.2898, pruned_loss=0.09073, over 949558.62 frames. ], batch size: 27, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:41:29,961 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18235.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:41:30,595 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18236.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:41:31,865 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0100, 1.6459, 1.8057, 1.7749, 1.5586, 1.5419, 1.6842, 1.7233], device='cuda:6'), covar=tensor([0.9956, 1.4592, 1.0600, 1.4221, 1.5450, 1.0066, 1.7330, 0.9702], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0253, 0.0255, 0.0263, 0.0242, 0.0217, 0.0277, 0.0220], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6')
2023-03-26 02:42:18,619 INFO [finetune.py:976] (6/7) Epoch 4, batch 1100, loss[loss=0.2125, simple_loss=0.2681, pruned_loss=0.07841, over 4019.00 frames. ], tot_loss[loss=0.2371, simple_loss=0.2911, pruned_loss=0.09154, over 951781.50 frames. ], batch size: 17, lr: 3.97e-03, grad_scale: 64.0
2023-03-26 02:42:49,581 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18316.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 02:42:50,751 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.851e+02 2.220e+02 2.754e+02 4.687e+02, threshold=4.440e+02, percent-clipped=1.0
2023-03-26 02:43:08,651 INFO [finetune.py:976] (6/7) Epoch 4, batch 1150, loss[loss=0.2709, simple_loss=0.2945, pruned_loss=0.1236, over 4356.00 frames. ], tot_loss[loss=0.2398, simple_loss=0.2939, pruned_loss=0.09288, over 952157.76 frames. ], batch size: 19, lr: 3.97e-03, grad_scale: 64.0
], batch size: 19, lr: 3.97e-03, grad_scale: 64.0 2023-03-26 02:43:25,367 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18356.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:44:03,368 INFO [finetune.py:976] (6/7) Epoch 4, batch 1200, loss[loss=0.228, simple_loss=0.2782, pruned_loss=0.08889, over 4908.00 frames. ], tot_loss[loss=0.2372, simple_loss=0.2913, pruned_loss=0.09155, over 954045.05 frames. ], batch size: 36, lr: 3.97e-03, grad_scale: 64.0 2023-03-26 02:44:05,849 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18386.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:44:28,300 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18404.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:44:47,474 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.754e+02 2.082e+02 2.492e+02 3.668e+02, threshold=4.164e+02, percent-clipped=0.0 2023-03-26 02:45:07,133 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-26 02:45:07,910 INFO [finetune.py:976] (6/7) Epoch 4, batch 1250, loss[loss=0.1968, simple_loss=0.258, pruned_loss=0.06778, over 4909.00 frames. ], tot_loss[loss=0.2336, simple_loss=0.2881, pruned_loss=0.08957, over 956099.88 frames. ], batch size: 35, lr: 3.97e-03, grad_scale: 64.0 2023-03-26 02:45:09,084 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18434.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:45:27,380 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7982, 1.6959, 1.4093, 1.1698, 1.9719, 2.1997, 1.9622, 1.7681], device='cuda:6'), covar=tensor([0.0280, 0.0477, 0.0722, 0.0472, 0.0322, 0.0577, 0.0263, 0.0426], device='cuda:6'), in_proj_covar=tensor([0.0084, 0.0113, 0.0138, 0.0117, 0.0104, 0.0099, 0.0091, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.5697e-05, 8.9649e-05, 1.1115e-04, 9.2050e-05, 8.2700e-05, 7.3874e-05, 7.0370e-05, 8.5705e-05], device='cuda:6') 2023-03-26 02:45:59,525 INFO [finetune.py:976] (6/7) Epoch 4, batch 1300, loss[loss=0.1805, simple_loss=0.2511, pruned_loss=0.05492, over 4772.00 frames. ], tot_loss[loss=0.2296, simple_loss=0.2841, pruned_loss=0.08752, over 957116.97 frames. ], batch size: 29, lr: 3.97e-03, grad_scale: 64.0 2023-03-26 02:46:10,680 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18490.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:46:22,662 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18508.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 02:46:26,900 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4027, 1.3217, 1.3683, 1.3328, 0.8222, 2.0338, 0.8085, 1.3359], device='cuda:6'), covar=tensor([0.2883, 0.1912, 0.1770, 0.2063, 0.1873, 0.0372, 0.2708, 0.1141], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0112, 0.0116, 0.0120, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 02:46:29,709 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.658e+02 2.008e+02 2.632e+02 4.281e+02, threshold=4.017e+02, percent-clipped=1.0 2023-03-26 02:46:34,682 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. 
limit=2.0 2023-03-26 02:46:36,934 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18530.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:46:37,495 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18531.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:46:41,946 INFO [finetune.py:976] (6/7) Epoch 4, batch 1350, loss[loss=0.2242, simple_loss=0.2963, pruned_loss=0.07608, over 4864.00 frames. ], tot_loss[loss=0.2318, simple_loss=0.2861, pruned_loss=0.08877, over 956709.30 frames. ], batch size: 44, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:46:48,700 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0443, 2.0298, 2.0707, 1.4229, 2.3764, 2.4723, 2.1890, 1.6926], device='cuda:6'), covar=tensor([0.0680, 0.0665, 0.0825, 0.1174, 0.0425, 0.0596, 0.0716, 0.1333], device='cuda:6'), in_proj_covar=tensor([0.0139, 0.0133, 0.0144, 0.0129, 0.0110, 0.0141, 0.0146, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 02:46:51,572 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5715, 1.4559, 1.2974, 1.3382, 1.7441, 1.8232, 1.5885, 1.2934], device='cuda:6'), covar=tensor([0.0261, 0.0365, 0.0620, 0.0368, 0.0233, 0.0455, 0.0312, 0.0435], device='cuda:6'), in_proj_covar=tensor([0.0084, 0.0113, 0.0137, 0.0117, 0.0104, 0.0099, 0.0091, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.5586e-05, 8.9462e-05, 1.1065e-04, 9.2285e-05, 8.2561e-05, 7.3786e-05, 7.0280e-05, 8.5540e-05], device='cuda:6') 2023-03-26 02:46:52,145 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8379, 1.7077, 1.3597, 1.7958, 1.8642, 1.5248, 2.6393, 1.7935], device='cuda:6'), covar=tensor([0.1628, 0.2874, 0.3967, 0.3512, 0.2802, 0.1848, 0.2594, 0.2413], device='cuda:6'), in_proj_covar=tensor([0.0162, 0.0192, 0.0235, 0.0251, 0.0220, 0.0183, 0.0207, 0.0186], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 02:46:55,246 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18551.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:47:11,786 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18569.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 02:47:11,816 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0280, 1.8602, 1.5018, 1.9009, 1.7501, 1.7203, 1.7415, 2.6957], device='cuda:6'), covar=tensor([0.8347, 0.8876, 0.7012, 0.9387, 0.8100, 0.4788, 0.8761, 0.2524], device='cuda:6'), in_proj_covar=tensor([0.0276, 0.0251, 0.0219, 0.0283, 0.0235, 0.0196, 0.0239, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6') 2023-03-26 02:47:20,091 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 02:47:25,815 INFO [finetune.py:976] (6/7) Epoch 4, batch 1400, loss[loss=0.2383, simple_loss=0.3022, pruned_loss=0.08725, over 4915.00 frames. ], tot_loss[loss=0.2369, simple_loss=0.2912, pruned_loss=0.09129, over 956129.24 frames. 
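grad_scale doubles from 32.0 to 64.0 around batch 1000 and is back at 32.0 by batch 1350, which is the signature of dynamic loss scaling in mixed-precision training: the scale is grown after a long run of finite gradients and halved when an overflow is detected. A toy sketch of that policy, assuming PyTorch-GradScaler-style behaviour (the growth/backoff constants below are illustrative defaults, not values taken from this run):

```python
class DynamicLossScale:
    """Toy dynamic loss scaler with a GradScaler-style grow/backoff policy."""

    def __init__(self, init_scale=32.0, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self.scale = init_scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def update(self, found_inf: bool):
        if found_inf:
            # overflow detected: cut the scale (the 64.0 -> 32.0 step here)
            self.scale *= self.backoff_factor
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                # long run of finite gradients: grow it (32.0 -> 64.0)
                self.scale *= self.growth_factor
                self._good_steps = 0
```

The later return to 64.0 (around batch 3350 below) fits the same grow-after-a-quiet-period rule.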
], batch size: 36, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:47:53,784 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:47:55,467 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.253e+02 1.863e+02 2.207e+02 2.712e+02 5.337e+02, threshold=4.415e+02, percent-clipped=2.0 2023-03-26 02:48:10,033 INFO [finetune.py:976] (6/7) Epoch 4, batch 1450, loss[loss=0.1849, simple_loss=0.2464, pruned_loss=0.06168, over 4776.00 frames. ], tot_loss[loss=0.2361, simple_loss=0.2913, pruned_loss=0.09048, over 954890.60 frames. ], batch size: 29, lr: 3.97e-03, grad_scale: 32.0 2023-03-26 02:48:43,321 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18664.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:48:45,426 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 02:48:49,445 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18674.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:48:56,989 INFO [finetune.py:976] (6/7) Epoch 4, batch 1500, loss[loss=0.2556, simple_loss=0.2949, pruned_loss=0.1081, over 4149.00 frames. ], tot_loss[loss=0.2376, simple_loss=0.2925, pruned_loss=0.09138, over 954211.63 frames. ], batch size: 65, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:49:36,302 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.377e+02 1.810e+02 2.122e+02 2.509e+02 6.153e+02, threshold=4.245e+02, percent-clipped=1.0 2023-03-26 02:49:54,647 INFO [finetune.py:976] (6/7) Epoch 4, batch 1550, loss[loss=0.2394, simple_loss=0.2844, pruned_loss=0.09721, over 4694.00 frames. ], tot_loss[loss=0.2355, simple_loss=0.2908, pruned_loss=0.09015, over 954213.54 frames. ], batch size: 59, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:49:55,975 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18735.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:50:24,710 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4329, 3.8805, 3.9730, 4.2908, 4.1894, 3.8874, 4.4960, 1.2494], device='cuda:6'), covar=tensor([0.0627, 0.0693, 0.0841, 0.0755, 0.1039, 0.1340, 0.0682, 0.5109], device='cuda:6'), in_proj_covar=tensor([0.0357, 0.0242, 0.0275, 0.0290, 0.0335, 0.0283, 0.0306, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 02:50:39,056 INFO [finetune.py:976] (6/7) Epoch 4, batch 1600, loss[loss=0.2129, simple_loss=0.2696, pruned_loss=0.07809, over 4852.00 frames. ], tot_loss[loss=0.2341, simple_loss=0.2888, pruned_loss=0.08967, over 953891.67 frames. 
], batch size: 44, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:50:42,310 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9321, 1.8421, 1.7101, 2.0019, 1.1891, 4.4640, 1.6280, 2.2446], device='cuda:6'), covar=tensor([0.3264, 0.2320, 0.2022, 0.2128, 0.1935, 0.0105, 0.2420, 0.1299], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0112, 0.0116, 0.0120, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 02:51:19,526 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.645e+02 2.011e+02 2.399e+02 5.772e+02, threshold=4.021e+02, percent-clipped=1.0 2023-03-26 02:51:30,348 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18830.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:51:30,962 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=18831.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:51:32,085 INFO [finetune.py:976] (6/7) Epoch 4, batch 1650, loss[loss=0.212, simple_loss=0.2649, pruned_loss=0.07952, over 4874.00 frames. ], tot_loss[loss=0.2326, simple_loss=0.2867, pruned_loss=0.08925, over 954240.59 frames. ], batch size: 31, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:51:48,111 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18846.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:52:05,192 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=18864.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 02:52:13,737 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9634, 1.8150, 1.7261, 2.1514, 1.3577, 4.6230, 1.7051, 2.3948], device='cuda:6'), covar=tensor([0.3729, 0.2575, 0.2154, 0.2170, 0.1939, 0.0090, 0.2527, 0.1400], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0112, 0.0117, 0.0120, 0.0117, 0.0097, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 02:52:23,037 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18878.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:52:23,635 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=18879.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:52:25,973 INFO [finetune.py:976] (6/7) Epoch 4, batch 1700, loss[loss=0.1794, simple_loss=0.2357, pruned_loss=0.06159, over 4792.00 frames. ], tot_loss[loss=0.2304, simple_loss=0.2845, pruned_loss=0.08815, over 956611.72 frames. ], batch size: 25, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:52:31,219 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0 2023-03-26 02:53:00,726 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.767e+02 2.149e+02 2.599e+02 5.673e+02, threshold=4.299e+02, percent-clipped=2.0 2023-03-26 02:53:07,535 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=18930.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:53:09,236 INFO [finetune.py:976] (6/7) Epoch 4, batch 1750, loss[loss=0.27, simple_loss=0.3298, pruned_loss=0.1051, over 4849.00 frames. ], tot_loss[loss=0.233, simple_loss=0.2873, pruned_loss=0.08933, over 954846.42 frames. 
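The zipformer.py:1188 records are per-stack layer-dropout decisions: each encoder stack has its own warmup window in batch counts (warmup_begin/warmup_end), on most batches nothing is dropped (num_to_drop=0), and the occasional record like num_to_drop=1, layers_to_drop={1} above shows one whole layer being skipped stochastically for that batch. The decision is plausibly an independent coin flip per layer; a minimal sketch (the drop probability and any dependence on warmup progress are assumptions, since the log only shows outcomes):

```python
import random

def sample_layers_to_drop(num_layers: int, drop_prob: float = 0.05) -> set:
    """One batch's layer-skip decision: each layer is dropped independently
    with probability drop_prob. Both the value and any warmup-dependent
    schedule are assumed here, not read from the log."""
    return {i for i in range(num_layers) if random.random() < drop_prob}

# Most draws come back empty, matching the many num_to_drop=0 records:
random.seed(0)
print([sample_layers_to_drop(4) for _ in range(5)])
```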
], batch size: 47, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:53:11,221 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2693, 2.0850, 1.6963, 2.3514, 2.2317, 1.8446, 2.8366, 2.1957], device='cuda:6'), covar=tensor([0.1932, 0.4282, 0.4462, 0.4240, 0.3421, 0.2142, 0.4021, 0.2691], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0194, 0.0238, 0.0255, 0.0223, 0.0185, 0.0210, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 02:53:33,391 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7029, 3.6887, 3.6024, 1.9520, 3.8145, 2.7878, 0.9232, 2.6422], device='cuda:6'), covar=tensor([0.2616, 0.1596, 0.1603, 0.2916, 0.0918, 0.0980, 0.4327, 0.1345], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0170, 0.0164, 0.0129, 0.0156, 0.0123, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 02:53:50,573 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5911, 3.5496, 3.4115, 1.7001, 3.6347, 2.7666, 0.9538, 2.5034], device='cuda:6'), covar=tensor([0.2738, 0.2035, 0.1799, 0.3193, 0.1079, 0.0992, 0.4400, 0.1526], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0170, 0.0164, 0.0129, 0.0156, 0.0123, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 02:53:52,363 INFO [finetune.py:976] (6/7) Epoch 4, batch 1800, loss[loss=0.2262, simple_loss=0.2907, pruned_loss=0.08087, over 4792.00 frames. ], tot_loss[loss=0.2353, simple_loss=0.2901, pruned_loss=0.0903, over 954615.32 frames. ], batch size: 51, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:53:57,517 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=18991.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:54:06,967 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6085, 2.2157, 1.9348, 1.0177, 2.2452, 1.9841, 1.7110, 2.0601], device='cuda:6'), covar=tensor([0.0947, 0.1036, 0.2023, 0.2792, 0.1785, 0.2461, 0.2474, 0.1335], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0199, 0.0202, 0.0189, 0.0216, 0.0208, 0.0218, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 02:54:32,252 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.317e+02 1.869e+02 2.115e+02 2.590e+02 5.981e+02, threshold=4.230e+02, percent-clipped=1.0 2023-03-26 02:54:50,223 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:54:52,012 INFO [finetune.py:976] (6/7) Epoch 4, batch 1850, loss[loss=0.2764, simple_loss=0.3232, pruned_loss=0.1148, over 4864.00 frames. ], tot_loss[loss=0.2377, simple_loss=0.2924, pruned_loss=0.09153, over 952259.29 frames. ], batch size: 31, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:55:39,892 INFO [finetune.py:976] (6/7) Epoch 4, batch 1900, loss[loss=0.2573, simple_loss=0.3266, pruned_loss=0.09395, over 4847.00 frames. ], tot_loss[loss=0.2376, simple_loss=0.2929, pruned_loss=0.09121, over 951874.12 frames. 
], batch size: 44, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:56:12,754 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.301e+02 1.752e+02 2.082e+02 2.658e+02 3.786e+02, threshold=4.164e+02, percent-clipped=0.0 2023-03-26 02:56:25,561 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19129.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 02:56:33,398 INFO [finetune.py:976] (6/7) Epoch 4, batch 1950, loss[loss=0.2141, simple_loss=0.2639, pruned_loss=0.08214, over 4819.00 frames. ], tot_loss[loss=0.2346, simple_loss=0.2901, pruned_loss=0.08949, over 952741.30 frames. ], batch size: 25, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:56:41,418 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19146.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:56:53,403 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19164.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 02:57:07,460 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.61 vs. limit=5.0 2023-03-26 02:57:09,560 INFO [finetune.py:976] (6/7) Epoch 4, batch 2000, loss[loss=0.2194, simple_loss=0.2732, pruned_loss=0.08286, over 4724.00 frames. ], tot_loss[loss=0.2328, simple_loss=0.2878, pruned_loss=0.08888, over 954168.29 frames. ], batch size: 54, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:57:14,700 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19190.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 02:57:17,082 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19194.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:57:18,462 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-26 02:57:35,166 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19212.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 02:57:39,345 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.251e+02 1.681e+02 2.009e+02 2.396e+02 5.395e+02, threshold=4.017e+02, percent-clipped=3.0 2023-03-26 02:57:49,438 INFO [finetune.py:976] (6/7) Epoch 4, batch 2050, loss[loss=0.214, simple_loss=0.2718, pruned_loss=0.07809, over 4850.00 frames. ], tot_loss[loss=0.2285, simple_loss=0.2831, pruned_loss=0.08696, over 954380.00 frames. ], batch size: 44, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:58:17,527 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 02:58:20,775 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5070, 1.4342, 1.9684, 2.9970, 2.1108, 2.1635, 0.8951, 2.3992], device='cuda:6'), covar=tensor([0.1743, 0.1614, 0.1338, 0.0642, 0.0801, 0.1610, 0.1991, 0.0620], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0119, 0.0137, 0.0166, 0.0104, 0.0144, 0.0129, 0.0106], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 02:58:31,806 INFO [finetune.py:976] (6/7) Epoch 4, batch 2100, loss[loss=0.2349, simple_loss=0.2896, pruned_loss=0.09014, over 4817.00 frames. ], tot_loss[loss=0.2282, simple_loss=0.2826, pruned_loss=0.08688, over 954165.29 frames. 
], batch size: 38, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:58:34,246 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19286.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:59:06,392 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6788, 3.8109, 3.7059, 1.7377, 3.9445, 2.8870, 0.7848, 2.6911], device='cuda:6'), covar=tensor([0.2671, 0.1582, 0.1435, 0.3444, 0.0927, 0.1011, 0.4642, 0.1550], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0169, 0.0162, 0.0128, 0.0154, 0.0121, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 02:59:09,862 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.737e+02 2.057e+02 2.371e+02 3.601e+02, threshold=4.115e+02, percent-clipped=0.0 2023-03-26 02:59:26,228 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19330.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 02:59:27,979 INFO [finetune.py:976] (6/7) Epoch 4, batch 2150, loss[loss=0.2362, simple_loss=0.303, pruned_loss=0.08468, over 4893.00 frames. ], tot_loss[loss=0.2322, simple_loss=0.2866, pruned_loss=0.08892, over 950771.19 frames. ], batch size: 32, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 02:59:36,142 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.3029, 4.5307, 4.7137, 5.1414, 4.9689, 4.7334, 5.3966, 1.6412], device='cuda:6'), covar=tensor([0.0733, 0.0783, 0.0855, 0.0972, 0.1301, 0.1666, 0.0583, 0.5818], device='cuda:6'), in_proj_covar=tensor([0.0358, 0.0243, 0.0276, 0.0292, 0.0336, 0.0285, 0.0308, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:00:00,457 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 03:00:10,916 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19378.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:00:13,945 INFO [finetune.py:976] (6/7) Epoch 4, batch 2200, loss[loss=0.204, simple_loss=0.2623, pruned_loss=0.07286, over 4768.00 frames. ], tot_loss[loss=0.2338, simple_loss=0.289, pruned_loss=0.08925, over 951821.89 frames. ], batch size: 27, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:00:25,721 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6733, 3.5480, 3.4178, 1.6088, 3.6807, 2.7592, 0.7755, 2.4633], device='cuda:6'), covar=tensor([0.2745, 0.1705, 0.1611, 0.3248, 0.0925, 0.0973, 0.4411, 0.1409], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0168, 0.0162, 0.0128, 0.0154, 0.0121, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 03:00:33,355 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19394.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:01:05,357 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.822e+02 2.092e+02 2.549e+02 4.918e+02, threshold=4.184e+02, percent-clipped=2.0 2023-03-26 03:01:19,365 INFO [finetune.py:976] (6/7) Epoch 4, batch 2250, loss[loss=0.2398, simple_loss=0.2957, pruned_loss=0.09193, over 4821.00 frames. ], tot_loss[loss=0.2361, simple_loss=0.2913, pruned_loss=0.09049, over 953428.68 frames. 
], batch size: 33, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:01:46,297 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19455.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:02:09,423 INFO [finetune.py:976] (6/7) Epoch 4, batch 2300, loss[loss=0.2739, simple_loss=0.3104, pruned_loss=0.1187, over 4730.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.2907, pruned_loss=0.08956, over 954500.26 frames. ], batch size: 54, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:02:12,905 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19485.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 03:02:24,545 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3975, 1.2814, 1.5700, 2.3814, 1.6482, 2.1064, 0.8532, 1.9723], device='cuda:6'), covar=tensor([0.1863, 0.1791, 0.1294, 0.0913, 0.1050, 0.1367, 0.1788, 0.0852], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0119, 0.0137, 0.0166, 0.0105, 0.0144, 0.0129, 0.0106], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 03:02:37,773 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.717e+02 2.025e+02 2.639e+02 4.089e+02, threshold=4.050e+02, percent-clipped=0.0 2023-03-26 03:02:53,442 INFO [finetune.py:976] (6/7) Epoch 4, batch 2350, loss[loss=0.209, simple_loss=0.2675, pruned_loss=0.07528, over 4909.00 frames. ], tot_loss[loss=0.233, simple_loss=0.2884, pruned_loss=0.08878, over 955579.78 frames. ], batch size: 36, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:03:37,780 INFO [finetune.py:976] (6/7) Epoch 4, batch 2400, loss[loss=0.1701, simple_loss=0.2382, pruned_loss=0.05104, over 4759.00 frames. ], tot_loss[loss=0.2313, simple_loss=0.286, pruned_loss=0.08831, over 956456.99 frames. ], batch size: 27, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:03:40,269 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19586.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:03:40,475 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.60 vs. limit=2.0 2023-03-26 03:04:10,047 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=19618.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:04:10,430 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-26 03:04:10,537 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.651e+02 1.936e+02 2.390e+02 3.810e+02, threshold=3.872e+02, percent-clipped=0.0 2023-03-26 03:04:19,623 INFO [finetune.py:976] (6/7) Epoch 4, batch 2450, loss[loss=0.2469, simple_loss=0.299, pruned_loss=0.09739, over 4923.00 frames. ], tot_loss[loss=0.2279, simple_loss=0.2825, pruned_loss=0.08666, over 956663.16 frames. 
], batch size: 38, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:04:20,303 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19634.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:04:30,832 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0626, 1.8770, 1.4982, 1.8651, 1.8663, 1.7600, 1.7799, 2.7413], device='cuda:6'), covar=tensor([0.9545, 1.0833, 0.7888, 1.1198, 0.8820, 0.5240, 1.1103, 0.3216], device='cuda:6'), in_proj_covar=tensor([0.0277, 0.0253, 0.0220, 0.0284, 0.0236, 0.0196, 0.0240, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6') 2023-03-26 03:04:59,941 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=19679.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:05:02,075 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 03:05:02,277 INFO [finetune.py:976] (6/7) Epoch 4, batch 2500, loss[loss=0.1775, simple_loss=0.2551, pruned_loss=0.04994, over 4773.00 frames. ], tot_loss[loss=0.2314, simple_loss=0.2856, pruned_loss=0.08858, over 956420.63 frames. ], batch size: 27, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:05:28,415 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7041, 1.3297, 2.1738, 1.4718, 1.8674, 2.0389, 1.4529, 2.1493], device='cuda:6'), covar=tensor([0.1415, 0.2443, 0.1143, 0.1922, 0.0866, 0.1296, 0.2638, 0.0782], device='cuda:6'), in_proj_covar=tensor([0.0210, 0.0209, 0.0206, 0.0200, 0.0184, 0.0228, 0.0218, 0.0207], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:05:30,653 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.296e+02 1.721e+02 2.075e+02 2.470e+02 4.533e+02, threshold=4.150e+02, percent-clipped=4.0 2023-03-26 03:05:40,557 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7486, 1.5450, 1.5941, 1.0159, 1.6285, 1.8342, 1.8035, 1.5009], device='cuda:6'), covar=tensor([0.0834, 0.0599, 0.0430, 0.0540, 0.0385, 0.0470, 0.0310, 0.0574], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0156, 0.0117, 0.0135, 0.0131, 0.0120, 0.0145, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.6733e-05, 1.1582e-04, 8.4867e-05, 9.9441e-05, 9.5038e-05, 8.8692e-05, 1.0818e-04, 1.0642e-04], device='cuda:6') 2023-03-26 03:05:45,256 INFO [finetune.py:976] (6/7) Epoch 4, batch 2550, loss[loss=0.2477, simple_loss=0.3047, pruned_loss=0.09532, over 4885.00 frames. ], tot_loss[loss=0.2344, simple_loss=0.2897, pruned_loss=0.08958, over 956668.58 frames. ], batch size: 32, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:05:58,492 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19750.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:06:31,492 INFO [finetune.py:976] (6/7) Epoch 4, batch 2600, loss[loss=0.2501, simple_loss=0.3153, pruned_loss=0.09238, over 4919.00 frames. ], tot_loss[loss=0.2358, simple_loss=0.2912, pruned_loss=0.09014, over 956585.67 frames. ], batch size: 38, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:06:33,342 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=19785.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 03:06:51,706 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. 
limit=2.0 2023-03-26 03:07:15,920 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.335e+02 1.804e+02 2.222e+02 2.871e+02 4.406e+02, threshold=4.445e+02, percent-clipped=2.0 2023-03-26 03:07:34,473 INFO [finetune.py:976] (6/7) Epoch 4, batch 2650, loss[loss=0.2178, simple_loss=0.259, pruned_loss=0.08829, over 4105.00 frames. ], tot_loss[loss=0.2374, simple_loss=0.2925, pruned_loss=0.09114, over 955878.30 frames. ], batch size: 17, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:07:34,541 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=19833.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 03:07:45,041 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.96 vs. limit=5.0 2023-03-26 03:08:05,141 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7571, 1.2717, 0.8166, 1.7133, 2.0568, 1.4140, 1.5535, 1.7705], device='cuda:6'), covar=tensor([0.1460, 0.2159, 0.2277, 0.1211, 0.2049, 0.2156, 0.1362, 0.1998], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0100, 0.0118, 0.0094, 0.0126, 0.0098, 0.0101, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 03:08:13,894 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8227, 1.6883, 1.5140, 1.8622, 1.7070, 4.6838, 1.8931, 2.6035], device='cuda:6'), covar=tensor([0.4373, 0.3149, 0.2503, 0.2902, 0.1779, 0.0126, 0.2405, 0.1166], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0112, 0.0116, 0.0120, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 03:08:34,284 INFO [finetune.py:976] (6/7) Epoch 4, batch 2700, loss[loss=0.2758, simple_loss=0.3162, pruned_loss=0.1177, over 4169.00 frames. ], tot_loss[loss=0.2369, simple_loss=0.2917, pruned_loss=0.09102, over 952080.92 frames. ], batch size: 65, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:09:16,013 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.716e+02 2.005e+02 2.489e+02 3.950e+02, threshold=4.009e+02, percent-clipped=0.0 2023-03-26 03:09:17,474 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 03:09:26,958 INFO [finetune.py:976] (6/7) Epoch 4, batch 2750, loss[loss=0.244, simple_loss=0.2868, pruned_loss=0.1007, over 4052.00 frames. ], tot_loss[loss=0.2322, simple_loss=0.2871, pruned_loss=0.08861, over 952412.93 frames. ], batch size: 17, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:09:28,870 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4975, 1.3064, 1.8407, 3.0105, 2.0617, 2.3060, 1.0117, 2.4597], device='cuda:6'), covar=tensor([0.1988, 0.2079, 0.1693, 0.0821, 0.1004, 0.1841, 0.2123, 0.0860], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0120, 0.0138, 0.0167, 0.0105, 0.0144, 0.0130, 0.0106], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 03:09:32,857 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.97 vs. limit=5.0 2023-03-26 03:09:54,788 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=19974.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:10:00,206 INFO [finetune.py:976] (6/7) Epoch 4, batch 2800, loss[loss=0.2212, simple_loss=0.2533, pruned_loss=0.09458, over 4387.00 frames. 
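The scaling.py:679 records compare a "whitening" metric against a limit (metric=4.97 vs. limit=5.0 for the 384-channel case above, metric=1.30 vs. limit=2.0 for the 8-group, 96-channel case): the metric measures how far the channel covariance within each group is from a scaled identity, sitting at 1.0 for perfectly white features and growing as the spectrum becomes lopsided, with a constraint activating once it crosses the limit. A sketch of one metric with exactly that property, assuming it is computed per group from the feature covariance (the exact formula inside scaling.py may differ):

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """x: (num_frames, num_channels). Splits channels into num_groups groups
    and averages, over groups, the quantity

        n * trace(cov @ cov) / trace(cov) ** 2

    where cov is the (n x n) covariance of one group's channels. This equals
    1.0 iff cov is a multiple of the identity ("white") and is larger
    otherwise. Assumed form; contiguous channel grouping is also assumed."""
    num_frames, num_channels = x.shape
    n = num_channels // num_groups
    x = x.reshape(num_frames, num_groups, n).permute(1, 0, 2)  # (G, T, n)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x.transpose(1, 2) @ x / num_frames                   # (G, n, n)
    tr_cov = cov.diagonal(dim1=1, dim2=2).sum(dim=1)           # trace(cov)
    tr_cov2 = (cov * cov.transpose(1, 2)).sum(dim=(1, 2))      # trace(cov @ cov)
    return (n * tr_cov2 / tr_cov ** 2).mean()
```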
], tot_loss[loss=0.2288, simple_loss=0.2836, pruned_loss=0.08701, over 953860.34 frames. ], batch size: 19, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:10:24,994 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.781e+02 2.091e+02 2.518e+02 3.954e+02, threshold=4.183e+02, percent-clipped=0.0 2023-03-26 03:10:26,216 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20020.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:10:34,542 INFO [finetune.py:976] (6/7) Epoch 4, batch 2850, loss[loss=0.2727, simple_loss=0.3174, pruned_loss=0.114, over 4745.00 frames. ], tot_loss[loss=0.2267, simple_loss=0.2817, pruned_loss=0.08583, over 954559.47 frames. ], batch size: 54, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:10:45,845 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20050.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:11:21,584 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20081.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:11:22,667 INFO [finetune.py:976] (6/7) Epoch 4, batch 2900, loss[loss=0.2821, simple_loss=0.3343, pruned_loss=0.1149, over 4863.00 frames. ], tot_loss[loss=0.2302, simple_loss=0.2853, pruned_loss=0.08749, over 953807.95 frames. ], batch size: 44, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:11:31,851 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20089.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:11:43,208 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20098.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:11:56,173 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20112.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:12:05,855 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.342e+02 1.862e+02 2.201e+02 2.748e+02 4.534e+02, threshold=4.402e+02, percent-clipped=1.0 2023-03-26 03:12:13,047 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 03:12:27,410 INFO [finetune.py:976] (6/7) Epoch 4, batch 2950, loss[loss=0.2623, simple_loss=0.3292, pruned_loss=0.09764, over 4917.00 frames. ], tot_loss[loss=0.2344, simple_loss=0.2899, pruned_loss=0.08947, over 952460.25 frames. ], batch size: 42, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:12:48,261 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20150.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:13:10,784 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20173.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:13:18,822 INFO [finetune.py:976] (6/7) Epoch 4, batch 3000, loss[loss=0.2223, simple_loss=0.2867, pruned_loss=0.07897, over 4924.00 frames. ], tot_loss[loss=0.2347, simple_loss=0.2903, pruned_loss=0.08957, over 953589.39 frames. 
], batch size: 33, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:13:18,822 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 03:13:30,949 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7180, 1.6097, 1.6433, 1.6618, 1.0809, 3.0803, 1.2206, 1.8771], device='cuda:6'), covar=tensor([0.3322, 0.2240, 0.1910, 0.2372, 0.2024, 0.0248, 0.2578, 0.1264], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0112, 0.0116, 0.0120, 0.0117, 0.0097, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 03:13:34,007 INFO [finetune.py:1010] (6/7) Epoch 4, validation: loss=0.169, simple_loss=0.2409, pruned_loss=0.04857, over 2265189.00 frames. 2023-03-26 03:13:34,007 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB 2023-03-26 03:13:58,680 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20204.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:14:18,151 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.917e+02 2.134e+02 2.804e+02 4.274e+02, threshold=4.268e+02, percent-clipped=0.0 2023-03-26 03:14:27,244 INFO [finetune.py:976] (6/7) Epoch 4, batch 3050, loss[loss=0.2018, simple_loss=0.2607, pruned_loss=0.07148, over 4767.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.2915, pruned_loss=0.08921, over 955151.79 frames. ], batch size: 26, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:14:38,115 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5795, 1.4799, 1.6054, 1.8892, 1.5559, 3.3501, 1.2728, 1.6148], device='cuda:6'), covar=tensor([0.0993, 0.1926, 0.1328, 0.1013, 0.1674, 0.0257, 0.1657, 0.1851], device='cuda:6'), in_proj_covar=tensor([0.0079, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 03:14:47,520 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.62 vs. limit=5.0 2023-03-26 03:15:06,986 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20265.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:15:16,651 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20274.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:15:24,345 INFO [finetune.py:976] (6/7) Epoch 4, batch 3100, loss[loss=0.204, simple_loss=0.2649, pruned_loss=0.0715, over 4814.00 frames. ], tot_loss[loss=0.2338, simple_loss=0.2905, pruned_loss=0.08854, over 957103.29 frames. ], batch size: 38, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:16:01,241 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.549e+02 1.969e+02 2.570e+02 5.632e+02, threshold=3.937e+02, percent-clipped=1.0 2023-03-26 03:16:03,113 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20322.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:16:14,159 INFO [finetune.py:976] (6/7) Epoch 4, batch 3150, loss[loss=0.2095, simple_loss=0.2651, pruned_loss=0.07697, over 4896.00 frames. ], tot_loss[loss=0.2301, simple_loss=0.286, pruned_loss=0.08707, over 956946.01 frames. 
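The validation record above obeys the same decomposition as the training records: 0.5 × 0.2409 + 0.04857 = 0.16902 ≈ 0.169, so the printed train and validation losses are directly comparable. Note also that the validation numbers are computed over the full 2,265,189-frame dev set, versus the roughly 950k-frame running aggregates in the tot_loss records.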
], batch size: 43, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:16:56,937 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20376.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:17:01,161 INFO [finetune.py:976] (6/7) Epoch 4, batch 3200, loss[loss=0.1865, simple_loss=0.2351, pruned_loss=0.06894, over 4727.00 frames. ], tot_loss[loss=0.2274, simple_loss=0.2828, pruned_loss=0.08596, over 956575.56 frames. ], batch size: 23, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:17:04,840 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20388.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:17:07,187 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5366, 2.5701, 2.4894, 1.8753, 2.7799, 2.8657, 2.6549, 2.3442], device='cuda:6'), covar=tensor([0.0527, 0.0488, 0.0612, 0.0822, 0.0478, 0.0519, 0.0495, 0.0774], device='cuda:6'), in_proj_covar=tensor([0.0140, 0.0134, 0.0147, 0.0130, 0.0112, 0.0146, 0.0149, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:17:40,203 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20418.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:17:40,661 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.625e+02 1.959e+02 2.342e+02 5.079e+02, threshold=3.919e+02, percent-clipped=3.0 2023-03-26 03:17:58,485 INFO [finetune.py:976] (6/7) Epoch 4, batch 3250, loss[loss=0.2157, simple_loss=0.2859, pruned_loss=0.07278, over 4906.00 frames. ], tot_loss[loss=0.2287, simple_loss=0.2835, pruned_loss=0.08691, over 954457.20 frames. ], batch size: 37, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:18:06,352 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20445.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:18:08,883 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20449.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:18:21,893 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20468.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:18:29,609 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20479.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:18:31,929 INFO [finetune.py:976] (6/7) Epoch 4, batch 3300, loss[loss=0.3458, simple_loss=0.3863, pruned_loss=0.1526, over 4926.00 frames. ], tot_loss[loss=0.2322, simple_loss=0.2877, pruned_loss=0.08834, over 954536.55 frames. 
], batch size: 42, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:18:38,624 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6331, 1.3635, 1.6732, 1.8668, 1.5154, 3.4149, 1.2722, 1.5771], device='cuda:6'), covar=tensor([0.1093, 0.2174, 0.1567, 0.1214, 0.1902, 0.0272, 0.1901, 0.2144], device='cuda:6'), in_proj_covar=tensor([0.0079, 0.0082, 0.0078, 0.0079, 0.0093, 0.0084, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 03:19:01,047 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0820, 1.6397, 2.3444, 1.5685, 2.0953, 2.3472, 1.7055, 2.4483], device='cuda:6'), covar=tensor([0.1747, 0.2707, 0.1627, 0.2568, 0.1253, 0.1873, 0.2999, 0.1077], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0207, 0.0203, 0.0197, 0.0182, 0.0224, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:19:09,458 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.750e+02 2.039e+02 2.534e+02 4.074e+02, threshold=4.078e+02, percent-clipped=2.0 2023-03-26 03:19:29,492 INFO [finetune.py:976] (6/7) Epoch 4, batch 3350, loss[loss=0.2454, simple_loss=0.3017, pruned_loss=0.09453, over 4839.00 frames. ], tot_loss[loss=0.2349, simple_loss=0.2914, pruned_loss=0.08916, over 957199.07 frames. ], batch size: 47, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:19:59,042 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20560.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:20:17,267 INFO [finetune.py:976] (6/7) Epoch 4, batch 3400, loss[loss=0.2217, simple_loss=0.2847, pruned_loss=0.07942, over 4747.00 frames. ], tot_loss[loss=0.2348, simple_loss=0.291, pruned_loss=0.08926, over 955816.32 frames. ], batch size: 26, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:20:20,440 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=20588.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:20:46,762 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.735e+02 2.048e+02 2.538e+02 3.974e+02, threshold=4.096e+02, percent-clipped=0.0 2023-03-26 03:20:47,586 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 03:21:05,099 INFO [finetune.py:976] (6/7) Epoch 4, batch 3450, loss[loss=0.1717, simple_loss=0.2229, pruned_loss=0.06024, over 4029.00 frames. ], tot_loss[loss=0.2343, simple_loss=0.2905, pruned_loss=0.08904, over 956651.07 frames. 
], batch size: 17, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:21:22,501 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=20649.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:21:29,624 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6899, 1.4585, 1.9521, 3.1742, 2.2472, 2.2681, 0.8618, 2.4883], device='cuda:6'), covar=tensor([0.1786, 0.1613, 0.1432, 0.0653, 0.0849, 0.1256, 0.2092, 0.0663], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0120, 0.0138, 0.0168, 0.0105, 0.0144, 0.0131, 0.0106], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 03:21:45,694 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2329, 1.4210, 1.3676, 1.4756, 1.5791, 2.9888, 1.3215, 1.6074], device='cuda:6'), covar=tensor([0.1079, 0.1684, 0.1125, 0.1015, 0.1464, 0.0279, 0.1422, 0.1580], device='cuda:6'), in_proj_covar=tensor([0.0079, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 03:21:45,700 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20676.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:21:51,858 INFO [finetune.py:976] (6/7) Epoch 4, batch 3500, loss[loss=0.2376, simple_loss=0.2819, pruned_loss=0.09664, over 4857.00 frames. ], tot_loss[loss=0.2329, simple_loss=0.2882, pruned_loss=0.08884, over 954968.72 frames. ], batch size: 44, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:22:31,509 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.236e+02 1.688e+02 2.022e+02 2.523e+02 5.341e+02, threshold=4.043e+02, percent-clipped=2.0 2023-03-26 03:22:35,130 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20724.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:22:43,659 INFO [finetune.py:976] (6/7) Epoch 4, batch 3550, loss[loss=0.2363, simple_loss=0.2925, pruned_loss=0.0901, over 4816.00 frames. ], tot_loss[loss=0.2313, simple_loss=0.2859, pruned_loss=0.08829, over 956030.58 frames. ], batch size: 39, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:22:56,131 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20744.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:22:56,772 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20745.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:23:24,140 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20768.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:23:33,931 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20774.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:23:41,429 INFO [finetune.py:976] (6/7) Epoch 4, batch 3600, loss[loss=0.2741, simple_loss=0.321, pruned_loss=0.1136, over 4911.00 frames. ], tot_loss[loss=0.2294, simple_loss=0.2838, pruned_loss=0.08753, over 956706.76 frames. 
], batch size: 43, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:23:58,256 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20793.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:23:58,293 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1962, 1.2645, 0.6639, 1.8672, 2.3266, 1.6192, 1.6521, 1.9701], device='cuda:6'), covar=tensor([0.1470, 0.2211, 0.2518, 0.1249, 0.1993, 0.2309, 0.1405, 0.2024], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0100, 0.0119, 0.0095, 0.0126, 0.0098, 0.0102, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 03:24:24,363 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20816.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:24:31,495 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.828e+01 1.831e+02 2.152e+02 2.506e+02 5.159e+02, threshold=4.304e+02, percent-clipped=1.0 2023-03-26 03:24:34,433 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4284, 1.3348, 1.3069, 1.5785, 1.4124, 2.8635, 1.1949, 1.4941], device='cuda:6'), covar=tensor([0.0933, 0.1643, 0.1419, 0.0955, 0.1519, 0.0305, 0.1491, 0.1609], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0081, 0.0078, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 03:24:50,035 INFO [finetune.py:976] (6/7) Epoch 4, batch 3650, loss[loss=0.3127, simple_loss=0.3583, pruned_loss=0.1336, over 4720.00 frames. ], tot_loss[loss=0.2337, simple_loss=0.2876, pruned_loss=0.08994, over 955688.53 frames. ], batch size: 59, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:25:24,817 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=20860.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:25:35,150 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9353, 1.7720, 1.4323, 1.6852, 1.6881, 1.6615, 1.5938, 2.4669], device='cuda:6'), covar=tensor([0.8792, 0.9726, 0.7314, 1.0113, 0.7813, 0.4891, 0.9282, 0.2999], device='cuda:6'), in_proj_covar=tensor([0.0279, 0.0254, 0.0220, 0.0284, 0.0237, 0.0197, 0.0241, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6') 2023-03-26 03:25:52,756 INFO [finetune.py:976] (6/7) Epoch 4, batch 3700, loss[loss=0.2173, simple_loss=0.2847, pruned_loss=0.07494, over 4834.00 frames. ], tot_loss[loss=0.2333, simple_loss=0.2887, pruned_loss=0.08894, over 955624.64 frames. ], batch size: 47, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:25:55,606 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.33 vs. 
limit=5.0 2023-03-26 03:26:17,163 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=20908.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:26:17,224 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6675, 2.1370, 1.8026, 0.8465, 2.1382, 1.9424, 1.5170, 1.9176], device='cuda:6'), covar=tensor([0.0727, 0.1459, 0.2125, 0.2987, 0.2033, 0.2449, 0.2697, 0.1528], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0203, 0.0190, 0.0217, 0.0210, 0.0219, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:26:24,268 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.710e+02 2.022e+02 2.626e+02 5.956e+02, threshold=4.044e+02, percent-clipped=2.0 2023-03-26 03:26:34,612 INFO [finetune.py:976] (6/7) Epoch 4, batch 3750, loss[loss=0.2932, simple_loss=0.3332, pruned_loss=0.1266, over 4868.00 frames. ], tot_loss[loss=0.234, simple_loss=0.2894, pruned_loss=0.08926, over 954058.39 frames. ], batch size: 34, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:26:46,570 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=20944.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:26:56,526 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-26 03:27:16,044 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8584, 3.3758, 3.4826, 3.7429, 3.5670, 3.3617, 3.9283, 1.2339], device='cuda:6'), covar=tensor([0.0889, 0.0812, 0.0938, 0.0961, 0.1452, 0.1578, 0.0826, 0.5083], device='cuda:6'), in_proj_covar=tensor([0.0357, 0.0241, 0.0275, 0.0292, 0.0335, 0.0284, 0.0306, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:27:33,089 INFO [finetune.py:976] (6/7) Epoch 4, batch 3800, loss[loss=0.2271, simple_loss=0.2945, pruned_loss=0.07983, over 4777.00 frames. ], tot_loss[loss=0.2338, simple_loss=0.2902, pruned_loss=0.08873, over 954152.87 frames. ], batch size: 26, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:28:14,228 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.257e+02 1.718e+02 1.983e+02 2.372e+02 3.493e+02, threshold=3.966e+02, percent-clipped=0.0 2023-03-26 03:28:29,879 INFO [finetune.py:976] (6/7) Epoch 4, batch 3850, loss[loss=0.2255, simple_loss=0.274, pruned_loss=0.08853, over 4821.00 frames. ], tot_loss[loss=0.2324, simple_loss=0.2886, pruned_loss=0.08808, over 953496.38 frames. 
], batch size: 38, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:28:37,705 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21044.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:28:45,321 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21052.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:29:01,069 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8174, 1.7224, 2.2044, 3.4338, 2.4730, 2.4687, 1.0712, 2.7225], device='cuda:6'), covar=tensor([0.1701, 0.1488, 0.1291, 0.0594, 0.0823, 0.1462, 0.1880, 0.0600], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0119, 0.0137, 0.0167, 0.0104, 0.0144, 0.0130, 0.0105], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 03:29:08,452 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21074.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:29:19,706 INFO [finetune.py:976] (6/7) Epoch 4, batch 3900, loss[loss=0.2162, simple_loss=0.2655, pruned_loss=0.08345, over 4745.00 frames. ], tot_loss[loss=0.2312, simple_loss=0.286, pruned_loss=0.08816, over 953978.17 frames. ], batch size: 54, lr: 3.96e-03, grad_scale: 64.0 2023-03-26 03:29:26,813 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21092.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:29:39,744 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.3123, 4.5281, 4.7281, 5.1257, 5.0099, 4.6819, 5.3843, 1.7271], device='cuda:6'), covar=tensor([0.0649, 0.0889, 0.0749, 0.0617, 0.1086, 0.1321, 0.0527, 0.5197], device='cuda:6'), in_proj_covar=tensor([0.0355, 0.0241, 0.0273, 0.0290, 0.0334, 0.0283, 0.0304, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:29:41,629 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8975, 1.0807, 1.6528, 1.5722, 1.5107, 1.4827, 1.4809, 1.5000], device='cuda:6'), covar=tensor([0.6106, 0.9461, 0.7473, 0.8645, 0.9425, 0.6833, 1.0414, 0.6862], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0251, 0.0255, 0.0261, 0.0241, 0.0218, 0.0276, 0.0221], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6') 2023-03-26 03:29:43,699 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21113.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:29:46,771 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21118.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:29:47,847 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.709e+02 1.984e+02 2.370e+02 6.134e+02, threshold=3.968e+02, percent-clipped=2.0 2023-03-26 03:29:49,149 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21122.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:29:59,343 INFO [finetune.py:976] (6/7) Epoch 4, batch 3950, loss[loss=0.2611, simple_loss=0.3098, pruned_loss=0.1063, over 4158.00 frames. ], tot_loss[loss=0.2271, simple_loss=0.2822, pruned_loss=0.08594, over 953771.38 frames. ], batch size: 65, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:30:24,993 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. 
limit=2.0 2023-03-26 03:30:47,956 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21179.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:30:50,289 INFO [finetune.py:976] (6/7) Epoch 4, batch 4000, loss[loss=0.2127, simple_loss=0.2722, pruned_loss=0.07657, over 4906.00 frames. ], tot_loss[loss=0.2257, simple_loss=0.2808, pruned_loss=0.08528, over 955406.03 frames. ], batch size: 35, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:30:59,313 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-26 03:31:19,053 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.740e+02 2.148e+02 2.427e+02 4.357e+02, threshold=4.296e+02, percent-clipped=1.0 2023-03-26 03:31:27,483 INFO [finetune.py:976] (6/7) Epoch 4, batch 4050, loss[loss=0.2288, simple_loss=0.2791, pruned_loss=0.08919, over 4836.00 frames. ], tot_loss[loss=0.2299, simple_loss=0.2848, pruned_loss=0.08745, over 952920.73 frames. ], batch size: 30, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:31:27,999 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-26 03:31:35,255 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21244.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:31:48,483 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7073, 1.6292, 1.3580, 1.4431, 1.9044, 1.9318, 1.7280, 1.4639], device='cuda:6'), covar=tensor([0.0266, 0.0337, 0.0482, 0.0369, 0.0210, 0.0416, 0.0293, 0.0405], device='cuda:6'), in_proj_covar=tensor([0.0086, 0.0115, 0.0139, 0.0119, 0.0106, 0.0100, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([6.7617e-05, 9.0381e-05, 1.1166e-04, 9.3953e-05, 8.4112e-05, 7.4407e-05, 7.0512e-05, 8.6102e-05], device='cuda:6') 2023-03-26 03:32:12,118 INFO [finetune.py:976] (6/7) Epoch 4, batch 4100, loss[loss=0.2413, simple_loss=0.2991, pruned_loss=0.09175, over 4207.00 frames. ], tot_loss[loss=0.2327, simple_loss=0.288, pruned_loss=0.08869, over 952380.09 frames. 
], batch size: 66, lr: 3.96e-03, grad_scale: 32.0 2023-03-26 03:32:23,703 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21292.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:32:44,708 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21312.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:32:53,056 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2606, 2.8674, 2.7184, 1.1074, 2.9874, 2.1243, 0.6638, 1.8085], device='cuda:6'), covar=tensor([0.2600, 0.2054, 0.1813, 0.3483, 0.1432, 0.1118, 0.4210, 0.1690], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0170, 0.0163, 0.0128, 0.0155, 0.0121, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 03:32:53,585 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.825e+02 2.074e+02 2.571e+02 5.101e+02, threshold=4.147e+02, percent-clipped=2.0 2023-03-26 03:33:03,609 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0316, 1.8777, 1.6336, 1.9203, 1.8352, 1.7856, 1.7360, 2.6102], device='cuda:6'), covar=tensor([0.7481, 0.9265, 0.6394, 0.8795, 0.7842, 0.4480, 0.9355, 0.2598], device='cuda:6'), in_proj_covar=tensor([0.0279, 0.0254, 0.0220, 0.0284, 0.0237, 0.0197, 0.0242, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001], device='cuda:6') 2023-03-26 03:33:04,207 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21328.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:33:07,033 INFO [finetune.py:976] (6/7) Epoch 4, batch 4150, loss[loss=0.2024, simple_loss=0.2732, pruned_loss=0.06579, over 4822.00 frames. ], tot_loss[loss=0.2331, simple_loss=0.2886, pruned_loss=0.08879, over 951496.04 frames. ], batch size: 30, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:33:35,293 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 03:33:45,543 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21373.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:33:52,601 INFO [finetune.py:976] (6/7) Epoch 4, batch 4200, loss[loss=0.2436, simple_loss=0.3043, pruned_loss=0.09141, over 4791.00 frames. ], tot_loss[loss=0.2328, simple_loss=0.2886, pruned_loss=0.08843, over 951642.89 frames. 
], batch size: 51, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:34:02,031 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21389.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:34:03,781 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0959, 2.0585, 1.9920, 1.4513, 2.2074, 2.2947, 2.1021, 1.8369], device='cuda:6'), covar=tensor([0.0657, 0.0580, 0.0792, 0.0992, 0.0534, 0.0653, 0.0670, 0.1009], device='cuda:6'), in_proj_covar=tensor([0.0140, 0.0134, 0.0146, 0.0129, 0.0112, 0.0145, 0.0149, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:34:21,946 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21408.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:34:39,317 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.725e+02 2.024e+02 2.533e+02 3.913e+02, threshold=4.049e+02, percent-clipped=0.0 2023-03-26 03:34:50,756 INFO [finetune.py:976] (6/7) Epoch 4, batch 4250, loss[loss=0.1991, simple_loss=0.2471, pruned_loss=0.07556, over 4714.00 frames. ], tot_loss[loss=0.2293, simple_loss=0.285, pruned_loss=0.08681, over 952628.48 frames. ], batch size: 59, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:35:18,415 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21474.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:35:24,282 INFO [finetune.py:976] (6/7) Epoch 4, batch 4300, loss[loss=0.182, simple_loss=0.2361, pruned_loss=0.06395, over 4939.00 frames. ], tot_loss[loss=0.2271, simple_loss=0.2826, pruned_loss=0.08583, over 954226.07 frames. ], batch size: 38, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:35:53,224 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.244e+02 1.671e+02 2.065e+02 2.560e+02 4.445e+02, threshold=4.130e+02, percent-clipped=1.0 2023-03-26 03:35:53,366 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2534, 1.9737, 1.3977, 0.6124, 1.6355, 1.9296, 1.7508, 1.8360], device='cuda:6'), covar=tensor([0.1183, 0.0904, 0.1629, 0.2311, 0.1546, 0.2365, 0.2310, 0.0931], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0202, 0.0203, 0.0191, 0.0219, 0.0211, 0.0220, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:36:01,105 INFO [finetune.py:976] (6/7) Epoch 4, batch 4350, loss[loss=0.2458, simple_loss=0.292, pruned_loss=0.09979, over 4910.00 frames. ], tot_loss[loss=0.2235, simple_loss=0.2787, pruned_loss=0.08417, over 952656.34 frames. ], batch size: 37, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:36:40,601 INFO [finetune.py:976] (6/7) Epoch 4, batch 4400, loss[loss=0.2708, simple_loss=0.3302, pruned_loss=0.1057, over 4855.00 frames. ], tot_loss[loss=0.225, simple_loss=0.2806, pruned_loss=0.08477, over 953592.30 frames. 
], batch size: 44, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:36:48,995 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21595.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:37:05,923 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21613.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:37:15,012 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.721e+02 2.036e+02 2.627e+02 4.967e+02, threshold=4.072e+02, percent-clipped=2.0 2023-03-26 03:37:22,900 INFO [finetune.py:976] (6/7) Epoch 4, batch 4450, loss[loss=0.2021, simple_loss=0.2576, pruned_loss=0.07334, over 4093.00 frames. ], tot_loss[loss=0.229, simple_loss=0.2852, pruned_loss=0.08641, over 951428.55 frames. ], batch size: 18, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:37:31,886 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4478, 1.1485, 1.2480, 1.2339, 1.6127, 1.5835, 1.4341, 1.2324], device='cuda:6'), covar=tensor([0.0286, 0.0365, 0.0595, 0.0336, 0.0239, 0.0413, 0.0268, 0.0379], device='cuda:6'), in_proj_covar=tensor([0.0086, 0.0115, 0.0139, 0.0119, 0.0106, 0.0100, 0.0092, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.7501e-05, 9.0393e-05, 1.1170e-04, 9.3898e-05, 8.3908e-05, 7.4596e-05, 7.0419e-05, 8.5602e-05], device='cuda:6') 2023-03-26 03:37:38,563 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21656.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:37:51,474 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21668.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:38:01,193 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21674.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:38:12,007 INFO [finetune.py:976] (6/7) Epoch 4, batch 4500, loss[loss=0.1916, simple_loss=0.256, pruned_loss=0.06364, over 4780.00 frames. ], tot_loss[loss=0.2293, simple_loss=0.286, pruned_loss=0.08628, over 952616.29 frames. ], batch size: 29, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:38:12,681 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21684.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:38:41,407 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21708.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:38:49,075 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.156e+02 1.721e+02 2.032e+02 2.543e+02 5.339e+02, threshold=4.063e+02, percent-clipped=3.0 2023-03-26 03:38:58,599 INFO [finetune.py:976] (6/7) Epoch 4, batch 4550, loss[loss=0.2791, simple_loss=0.3356, pruned_loss=0.1113, over 4732.00 frames. ], tot_loss[loss=0.231, simple_loss=0.2876, pruned_loss=0.08723, over 952429.43 frames. ], batch size: 54, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:39:01,854 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. 
limit=2.0 2023-03-26 03:39:14,120 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21756.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:39:18,472 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21763.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:39:30,509 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21774.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:39:40,881 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7812, 1.2666, 1.6502, 1.6311, 1.4387, 1.4554, 1.5474, 1.5353], device='cuda:6'), covar=tensor([0.7192, 1.0245, 0.8040, 0.9180, 0.9889, 0.7308, 1.1642, 0.7556], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0252, 0.0257, 0.0262, 0.0242, 0.0218, 0.0278, 0.0222], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:39:41,970 INFO [finetune.py:976] (6/7) Epoch 4, batch 4600, loss[loss=0.2355, simple_loss=0.2858, pruned_loss=0.09254, over 4889.00 frames. ], tot_loss[loss=0.2308, simple_loss=0.2876, pruned_loss=0.08698, over 953529.84 frames. ], batch size: 35, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:40:25,681 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.219e+02 1.789e+02 2.167e+02 2.687e+02 4.147e+02, threshold=4.334e+02, percent-clipped=1.0 2023-03-26 03:40:32,330 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=21822.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:40:33,592 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21824.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:40:44,876 INFO [finetune.py:976] (6/7) Epoch 4, batch 4650, loss[loss=0.2275, simple_loss=0.2706, pruned_loss=0.09216, over 4936.00 frames. ], tot_loss[loss=0.2273, simple_loss=0.2838, pruned_loss=0.08543, over 952331.03 frames. ], batch size: 33, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:41:06,210 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21850.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:41:28,343 INFO [finetune.py:976] (6/7) Epoch 4, batch 4700, loss[loss=0.2235, simple_loss=0.2759, pruned_loss=0.08548, over 4926.00 frames. ], tot_loss[loss=0.2254, simple_loss=0.2814, pruned_loss=0.08474, over 953383.61 frames. ], batch size: 37, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:41:29,860 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-26 03:41:44,576 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5739, 1.4669, 1.3424, 1.6010, 1.6590, 1.5990, 0.9934, 1.3710], device='cuda:6'), covar=tensor([0.2173, 0.2048, 0.1923, 0.1791, 0.1695, 0.1218, 0.2647, 0.1838], device='cuda:6'), in_proj_covar=tensor([0.0231, 0.0208, 0.0198, 0.0184, 0.0235, 0.0174, 0.0214, 0.0187], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:42:02,066 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=21911.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:42:08,439 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.749e+02 2.081e+02 2.546e+02 7.973e+02, threshold=4.162e+02, percent-clipped=1.0 2023-03-26 03:42:15,833 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9301, 1.7196, 2.3920, 1.5933, 2.2224, 2.3322, 1.6836, 2.4680], device='cuda:6'), covar=tensor([0.1508, 0.2148, 0.1435, 0.2092, 0.0876, 0.1468, 0.2701, 0.0841], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0205, 0.0202, 0.0195, 0.0183, 0.0223, 0.0214, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:42:16,914 INFO [finetune.py:976] (6/7) Epoch 4, batch 4750, loss[loss=0.2428, simple_loss=0.298, pruned_loss=0.09386, over 4922.00 frames. ], tot_loss[loss=0.2236, simple_loss=0.2793, pruned_loss=0.08393, over 953729.39 frames. ], batch size: 38, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:42:29,509 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21951.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:42:43,184 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7992, 1.7417, 1.7376, 1.7475, 1.2992, 3.3845, 1.5081, 2.0557], device='cuda:6'), covar=tensor([0.3177, 0.2070, 0.1905, 0.2262, 0.1852, 0.0210, 0.2419, 0.1118], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0112, 0.0116, 0.0120, 0.0116, 0.0097, 0.0100, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 03:42:46,681 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21968.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:42:53,027 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=21969.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:42:53,732 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=21970.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:42:57,628 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.54 vs. limit=5.0 2023-03-26 03:43:02,052 INFO [finetune.py:976] (6/7) Epoch 4, batch 4800, loss[loss=0.2394, simple_loss=0.303, pruned_loss=0.08784, over 4831.00 frames. ], tot_loss[loss=0.2265, simple_loss=0.2825, pruned_loss=0.08524, over 953818.76 frames. 
], batch size: 33, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:43:02,775 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=21984.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:43:29,804 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7605, 1.1444, 1.6400, 1.5150, 1.3728, 1.3971, 1.4297, 1.5176], device='cuda:6'), covar=tensor([0.5951, 0.8332, 0.6905, 0.7875, 0.8741, 0.6738, 1.0204, 0.6570], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0252, 0.0257, 0.0262, 0.0242, 0.0218, 0.0278, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:43:34,457 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22016.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:43:37,302 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.769e+02 1.987e+02 2.631e+02 5.032e+02, threshold=3.974e+02, percent-clipped=2.0 2023-03-26 03:43:44,133 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22031.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:43:44,661 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22032.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:43:45,185 INFO [finetune.py:976] (6/7) Epoch 4, batch 4850, loss[loss=0.2755, simple_loss=0.3235, pruned_loss=0.1137, over 4781.00 frames. ], tot_loss[loss=0.229, simple_loss=0.2858, pruned_loss=0.08604, over 953697.86 frames. ], batch size: 29, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:44:19,577 INFO [finetune.py:976] (6/7) Epoch 4, batch 4900, loss[loss=0.2832, simple_loss=0.3316, pruned_loss=0.1174, over 4834.00 frames. ], tot_loss[loss=0.2305, simple_loss=0.2877, pruned_loss=0.08666, over 955618.05 frames. ], batch size: 49, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:45:00,547 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22119.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:45:01,042 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.764e+02 2.232e+02 2.515e+02 4.523e+02, threshold=4.464e+02, percent-clipped=3.0 2023-03-26 03:45:18,652 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22130.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:45:20,457 INFO [finetune.py:976] (6/7) Epoch 4, batch 4950, loss[loss=0.2876, simple_loss=0.3448, pruned_loss=0.1152, over 4799.00 frames. ], tot_loss[loss=0.2331, simple_loss=0.29, pruned_loss=0.08809, over 955146.27 frames. ], batch size: 45, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:45:41,958 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 03:46:12,774 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22176.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:46:19,507 INFO [finetune.py:976] (6/7) Epoch 4, batch 5000, loss[loss=0.2529, simple_loss=0.3017, pruned_loss=0.102, over 4711.00 frames. ], tot_loss[loss=0.2288, simple_loss=0.2867, pruned_loss=0.08552, over 957496.20 frames. 
], batch size: 59, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:46:24,503 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22191.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:46:27,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1735, 1.9486, 1.6417, 2.1388, 1.9280, 1.8070, 1.8238, 2.8359], device='cuda:6'), covar=tensor([0.8031, 0.9893, 0.6740, 0.8885, 0.7774, 0.4864, 0.9814, 0.2924], device='cuda:6'), in_proj_covar=tensor([0.0279, 0.0254, 0.0219, 0.0284, 0.0237, 0.0198, 0.0242, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:46:35,050 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22206.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:46:40,527 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22215.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:46:43,477 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.598e+02 2.028e+02 2.482e+02 4.524e+02, threshold=4.056e+02, percent-clipped=1.0 2023-03-26 03:46:58,027 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8609, 2.0132, 1.7740, 1.1862, 2.1271, 2.0815, 1.9615, 1.7339], device='cuda:6'), covar=tensor([0.0721, 0.0549, 0.0850, 0.1017, 0.0512, 0.0663, 0.0696, 0.1075], device='cuda:6'), in_proj_covar=tensor([0.0138, 0.0133, 0.0145, 0.0128, 0.0110, 0.0144, 0.0148, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:46:59,735 INFO [finetune.py:976] (6/7) Epoch 4, batch 5050, loss[loss=0.1981, simple_loss=0.2675, pruned_loss=0.06435, over 4791.00 frames. ], tot_loss[loss=0.2272, simple_loss=0.2842, pruned_loss=0.08506, over 953632.65 frames. ], batch size: 29, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:47:01,665 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5961, 1.5646, 2.0613, 3.3029, 2.2910, 2.2416, 0.9870, 2.5501], device='cuda:6'), covar=tensor([0.1866, 0.1672, 0.1384, 0.0613, 0.0866, 0.1392, 0.1994, 0.0642], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0119, 0.0137, 0.0167, 0.0104, 0.0144, 0.0130, 0.0105], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 03:47:02,263 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22237.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:47:02,276 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22237.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:47:16,634 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22251.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:47:28,164 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22269.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:47:32,929 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22276.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:47:40,350 INFO [finetune.py:976] (6/7) Epoch 4, batch 5100, loss[loss=0.2319, simple_loss=0.2788, pruned_loss=0.09256, over 4916.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.2808, pruned_loss=0.08398, over 951700.95 frames. 
], batch size: 37, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:47:40,547 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0 2023-03-26 03:47:49,917 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22298.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:47:50,456 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22299.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:47:51,114 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0703, 1.7613, 2.4062, 1.5824, 2.2012, 2.4507, 1.8435, 2.4079], device='cuda:6'), covar=tensor([0.1752, 0.2451, 0.1512, 0.2244, 0.1079, 0.1447, 0.2789, 0.0946], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0206, 0.0202, 0.0197, 0.0183, 0.0224, 0.0215, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:48:03,280 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22317.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:48:05,003 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.872e+01 1.620e+02 1.853e+02 2.165e+02 3.345e+02, threshold=3.706e+02, percent-clipped=0.0 2023-03-26 03:48:08,733 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22326.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:48:13,376 INFO [finetune.py:976] (6/7) Epoch 4, batch 5150, loss[loss=0.2115, simple_loss=0.2766, pruned_loss=0.07321, over 4739.00 frames. ], tot_loss[loss=0.2235, simple_loss=0.2802, pruned_loss=0.08344, over 953895.54 frames. ], batch size: 27, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:48:45,301 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0 2023-03-26 03:48:51,909 INFO [finetune.py:976] (6/7) Epoch 4, batch 5200, loss[loss=0.1775, simple_loss=0.2458, pruned_loss=0.05454, over 4711.00 frames. ], tot_loss[loss=0.2293, simple_loss=0.2857, pruned_loss=0.08646, over 954026.01 frames. ], batch size: 23, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:48:53,240 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22385.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:49:26,428 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22419.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:49:26,883 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.279e+02 1.811e+02 2.155e+02 2.737e+02 4.498e+02, threshold=4.310e+02, percent-clipped=4.0 2023-03-26 03:49:29,539 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-26 03:49:40,616 INFO [finetune.py:976] (6/7) Epoch 4, batch 5250, loss[loss=0.2295, simple_loss=0.2866, pruned_loss=0.08617, over 4731.00 frames. ], tot_loss[loss=0.231, simple_loss=0.2882, pruned_loss=0.08687, over 955890.24 frames. ], batch size: 54, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:49:54,893 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22446.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:50:18,145 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22467.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:50:27,829 INFO [finetune.py:976] (6/7) Epoch 4, batch 5300, loss[loss=0.276, simple_loss=0.3239, pruned_loss=0.1141, over 4886.00 frames. 
], tot_loss[loss=0.2307, simple_loss=0.288, pruned_loss=0.08664, over 955274.86 frames. ], batch size: 32, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:50:30,484 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:50:44,795 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6708, 1.7536, 1.4158, 2.6767, 3.0149, 2.3853, 2.3871, 2.6755], device='cuda:6'), covar=tensor([0.1310, 0.1933, 0.2023, 0.0991, 0.1586, 0.1559, 0.1285, 0.1697], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0116, 0.0092, 0.0124, 0.0096, 0.0100, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 03:50:54,002 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22506.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:50:55,800 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9970, 1.5972, 2.4125, 1.5595, 2.1806, 2.3400, 1.6905, 2.3299], device='cuda:6'), covar=tensor([0.1644, 0.2328, 0.1416, 0.2203, 0.1040, 0.1540, 0.2907, 0.1138], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0206, 0.0204, 0.0197, 0.0184, 0.0226, 0.0216, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:50:56,466 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 03:51:10,354 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.196e+02 1.644e+02 2.088e+02 2.525e+02 4.526e+02, threshold=4.176e+02, percent-clipped=1.0 2023-03-26 03:51:29,037 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22532.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:51:29,575 INFO [finetune.py:976] (6/7) Epoch 4, batch 5350, loss[loss=0.1799, simple_loss=0.2416, pruned_loss=0.0591, over 4894.00 frames. ], tot_loss[loss=0.2287, simple_loss=0.2867, pruned_loss=0.08538, over 954971.84 frames. ], batch size: 32, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:51:40,396 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3662, 1.3054, 1.5928, 2.4530, 1.7690, 2.1918, 0.8680, 2.0261], device='cuda:6'), covar=tensor([0.1920, 0.1641, 0.1219, 0.0719, 0.0906, 0.1137, 0.1641, 0.0806], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0118, 0.0135, 0.0166, 0.0103, 0.0142, 0.0128, 0.0104], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 03:51:43,432 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22554.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:52:10,735 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22571.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:52:20,640 INFO [finetune.py:976] (6/7) Epoch 4, batch 5400, loss[loss=0.1801, simple_loss=0.2469, pruned_loss=0.05663, over 4749.00 frames. ], tot_loss[loss=0.2255, simple_loss=0.2833, pruned_loss=0.08383, over 956005.83 frames. ], batch size: 28, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:52:22,175 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.21 vs. 
limit=5.0 2023-03-26 03:52:27,237 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22593.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:52:27,907 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3725, 2.1791, 1.6957, 0.8261, 1.9809, 1.9276, 1.7423, 1.9853], device='cuda:6'), covar=tensor([0.0887, 0.0836, 0.1534, 0.2136, 0.1224, 0.2082, 0.2108, 0.0858], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0200, 0.0204, 0.0191, 0.0218, 0.0210, 0.0220, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:52:45,273 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.598e+02 1.961e+02 2.261e+02 4.832e+02, threshold=3.922e+02, percent-clipped=2.0 2023-03-26 03:52:50,434 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22626.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:52:54,548 INFO [finetune.py:976] (6/7) Epoch 4, batch 5450, loss[loss=0.2102, simple_loss=0.2636, pruned_loss=0.07844, over 4897.00 frames. ], tot_loss[loss=0.2227, simple_loss=0.28, pruned_loss=0.08274, over 956864.71 frames. ], batch size: 35, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:52:58,314 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22639.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:53:30,710 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22674.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:53:37,510 INFO [finetune.py:976] (6/7) Epoch 4, batch 5500, loss[loss=0.226, simple_loss=0.2917, pruned_loss=0.08011, over 4765.00 frames. ], tot_loss[loss=0.2205, simple_loss=0.2774, pruned_loss=0.0818, over 957946.95 frames. ], batch size: 27, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:53:43,833 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0 2023-03-26 03:53:48,427 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22700.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 03:53:49,004 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6794, 1.4997, 1.5568, 1.5548, 1.0519, 3.0069, 1.0966, 1.7430], device='cuda:6'), covar=tensor([0.3265, 0.2313, 0.1998, 0.2175, 0.1884, 0.0254, 0.2612, 0.1313], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0112, 0.0116, 0.0120, 0.0116, 0.0097, 0.0100, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 03:54:00,991 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.350e+02 1.787e+02 2.088e+02 2.580e+02 6.017e+02, threshold=4.176e+02, percent-clipped=5.0 2023-03-26 03:54:16,192 INFO [finetune.py:976] (6/7) Epoch 4, batch 5550, loss[loss=0.2405, simple_loss=0.2923, pruned_loss=0.09433, over 4782.00 frames. ], tot_loss[loss=0.2239, simple_loss=0.2807, pruned_loss=0.08354, over 956823.27 frames. 
], batch size: 25, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:54:21,605 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0962, 1.8500, 2.3995, 1.5015, 2.1770, 2.3400, 1.8396, 2.4659], device='cuda:6'), covar=tensor([0.1632, 0.2136, 0.1413, 0.2250, 0.0988, 0.1640, 0.2578, 0.0969], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0204, 0.0202, 0.0195, 0.0182, 0.0223, 0.0214, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:54:23,405 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:54:55,506 INFO [finetune.py:976] (6/7) Epoch 4, batch 5600, loss[loss=0.226, simple_loss=0.2954, pruned_loss=0.07831, over 4844.00 frames. ], tot_loss[loss=0.2251, simple_loss=0.2825, pruned_loss=0.08387, over 955768.72 frames. ], batch size: 49, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:54:57,311 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22786.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:55:01,948 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22794.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:55:07,208 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. limit=2.0 2023-03-26 03:55:28,584 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.750e+02 2.147e+02 2.459e+02 4.993e+02, threshold=4.295e+02, percent-clipped=1.0 2023-03-26 03:55:35,791 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=22825.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:55:40,663 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22832.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:55:41,212 INFO [finetune.py:976] (6/7) Epoch 4, batch 5650, loss[loss=0.2221, simple_loss=0.2839, pruned_loss=0.08015, over 4220.00 frames. ], tot_loss[loss=0.2266, simple_loss=0.2849, pruned_loss=0.08409, over 954544.77 frames. ], batch size: 65, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:55:41,977 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22834.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:56:06,187 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22855.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:56:21,394 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22871.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:56:31,135 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22880.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:56:33,112 INFO [finetune.py:976] (6/7) Epoch 4, batch 5700, loss[loss=0.252, simple_loss=0.2768, pruned_loss=0.1136, over 4435.00 frames. ], tot_loss[loss=0.2257, simple_loss=0.2825, pruned_loss=0.08438, over 939326.88 frames. ], batch size: 19, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:56:34,972 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=22886.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:56:41,776 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=22893.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:57:20,677 INFO [finetune.py:976] (6/7) Epoch 5, batch 0, loss[loss=0.2411, simple_loss=0.2988, pruned_loss=0.0917, over 4840.00 frames. 
], tot_loss[loss=0.2411, simple_loss=0.2988, pruned_loss=0.0917, over 4840.00 frames. ], batch size: 49, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:57:20,677 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 03:57:30,675 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4538, 1.2117, 1.3292, 1.2939, 1.6925, 1.5874, 1.4410, 1.3000], device='cuda:6'), covar=tensor([0.0335, 0.0320, 0.0561, 0.0341, 0.0243, 0.0475, 0.0352, 0.0365], device='cuda:6'), in_proj_covar=tensor([0.0086, 0.0115, 0.0138, 0.0120, 0.0105, 0.0101, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([6.7570e-05, 9.0440e-05, 1.1130e-04, 9.4555e-05, 8.3660e-05, 7.5062e-05, 7.0309e-05, 8.6112e-05], device='cuda:6') 2023-03-26 03:57:37,527 INFO [finetune.py:1010] (6/7) Epoch 5, validation: loss=0.1701, simple_loss=0.2413, pruned_loss=0.0494, over 2265189.00 frames. 2023-03-26 03:57:37,527 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB 2023-03-26 03:57:47,690 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8871, 1.2834, 1.7015, 1.6407, 1.4461, 1.4843, 1.5881, 1.5884], device='cuda:6'), covar=tensor([0.5988, 0.8530, 0.7417, 0.7910, 0.8687, 0.6555, 1.0577, 0.6772], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0251, 0.0257, 0.0260, 0.0240, 0.0218, 0.0276, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:57:48,822 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22919.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:57:49,348 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.844e+01 1.616e+02 1.840e+02 2.309e+02 3.969e+02, threshold=3.680e+02, percent-clipped=0.0 2023-03-26 03:58:02,584 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=22941.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:58:29,014 INFO [finetune.py:976] (6/7) Epoch 5, batch 50, loss[loss=0.2324, simple_loss=0.2898, pruned_loss=0.0875, over 4835.00 frames. ], tot_loss[loss=0.2364, simple_loss=0.2918, pruned_loss=0.09051, over 216093.52 frames. ], batch size: 47, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:59:04,513 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4685, 1.5236, 2.1279, 1.8896, 1.7805, 3.7889, 1.4270, 1.7921], device='cuda:6'), covar=tensor([0.1005, 0.1666, 0.1192, 0.1047, 0.1522, 0.0235, 0.1414, 0.1651], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 03:59:05,722 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=22995.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 03:59:17,857 INFO [finetune.py:976] (6/7) Epoch 5, batch 100, loss[loss=0.1959, simple_loss=0.2596, pruned_loss=0.06616, over 4902.00 frames. ], tot_loss[loss=0.2262, simple_loss=0.2811, pruned_loss=0.08566, over 378454.79 frames. 
], batch size: 36, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 03:59:23,735 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.764e+02 2.029e+02 2.456e+02 6.922e+02, threshold=4.057e+02, percent-clipped=5.0 2023-03-26 03:59:36,986 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23041.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 03:59:41,817 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0540, 1.8963, 1.5680, 1.9010, 1.8079, 1.7103, 1.7077, 2.5961], device='cuda:6'), covar=tensor([0.7713, 0.8212, 0.6280, 0.8015, 0.6830, 0.4452, 0.7710, 0.2790], device='cuda:6'), in_proj_covar=tensor([0.0281, 0.0256, 0.0221, 0.0286, 0.0238, 0.0200, 0.0243, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 03:59:51,326 INFO [finetune.py:976] (6/7) Epoch 5, batch 150, loss[loss=0.2173, simple_loss=0.2725, pruned_loss=0.08107, over 4763.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.276, pruned_loss=0.08314, over 505682.14 frames. ], batch size: 27, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 04:00:09,280 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23089.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:00:28,359 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23108.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:00:30,576 INFO [finetune.py:976] (6/7) Epoch 5, batch 200, loss[loss=0.217, simple_loss=0.2723, pruned_loss=0.08091, over 4910.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.2758, pruned_loss=0.0833, over 606335.16 frames. ], batch size: 35, lr: 3.95e-03, grad_scale: 64.0 2023-03-26 04:00:42,595 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.047e+02 1.662e+02 1.994e+02 2.595e+02 4.858e+02, threshold=3.989e+02, percent-clipped=3.0 2023-03-26 04:01:01,382 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23150.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:01:09,513 INFO [finetune.py:976] (6/7) Epoch 5, batch 250, loss[loss=0.2543, simple_loss=0.314, pruned_loss=0.0973, over 4818.00 frames. ], tot_loss[loss=0.2255, simple_loss=0.281, pruned_loss=0.08498, over 685022.07 frames. 
], batch size: 38, lr: 3.95e-03, grad_scale: 64.0 2023-03-26 04:01:23,294 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23169.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:01:31,047 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23181.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:01:49,651 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7880, 1.1220, 1.4671, 1.5541, 1.4008, 1.4124, 1.4571, 1.5511], device='cuda:6'), covar=tensor([0.7311, 1.0416, 0.8627, 0.9160, 1.0443, 0.7641, 1.1764, 0.7669], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0251, 0.0257, 0.0261, 0.0242, 0.0219, 0.0278, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:01:57,781 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4047, 1.5380, 1.6515, 1.7541, 1.6465, 3.4506, 1.4631, 1.6853], device='cuda:6'), covar=tensor([0.1059, 0.1707, 0.1213, 0.1093, 0.1596, 0.0225, 0.1359, 0.1578], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0082, 0.0078, 0.0080, 0.0093, 0.0084, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 04:02:00,760 INFO [finetune.py:976] (6/7) Epoch 5, batch 300, loss[loss=0.2401, simple_loss=0.2725, pruned_loss=0.1039, over 4216.00 frames. ], tot_loss[loss=0.2274, simple_loss=0.2837, pruned_loss=0.0856, over 743899.97 frames. ], batch size: 18, lr: 3.95e-03, grad_scale: 64.0 2023-03-26 04:02:11,902 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23217.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:02:20,057 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.235e+02 1.736e+02 2.142e+02 2.597e+02 5.294e+02, threshold=4.284e+02, percent-clipped=3.0 2023-03-26 04:02:21,971 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3500, 1.2931, 1.6259, 2.4226, 1.7306, 2.0389, 0.9603, 1.9336], device='cuda:6'), covar=tensor([0.1843, 0.1447, 0.1134, 0.0731, 0.0880, 0.1444, 0.1435, 0.0849], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0119, 0.0136, 0.0167, 0.0103, 0.0143, 0.0128, 0.0105], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 04:02:31,208 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:02:44,992 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8880, 1.2576, 1.4286, 1.6265, 1.4170, 1.4992, 1.5468, 1.5940], device='cuda:6'), covar=tensor([0.8315, 1.0813, 0.9621, 1.0379, 1.1709, 0.8722, 1.3240, 0.8447], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0251, 0.0257, 0.0261, 0.0242, 0.0219, 0.0277, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:02:59,195 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 04:03:03,006 INFO [finetune.py:976] (6/7) Epoch 5, batch 350, loss[loss=0.2589, simple_loss=0.3076, pruned_loss=0.1051, over 4814.00 frames. ], tot_loss[loss=0.2291, simple_loss=0.2851, pruned_loss=0.08658, over 789129.04 frames. 
], batch size: 33, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 04:03:20,902 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23278.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:03:34,252 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6573, 1.4607, 2.0920, 1.3754, 1.7965, 1.9060, 1.5154, 2.1405], device='cuda:6'), covar=tensor([0.1630, 0.2481, 0.1346, 0.2157, 0.1078, 0.1627, 0.2848, 0.0937], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0205, 0.0202, 0.0197, 0.0183, 0.0223, 0.0214, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:03:34,267 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23290.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:03:41,706 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23295.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:03:47,632 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5415, 1.4508, 1.3704, 1.4128, 1.6762, 1.3590, 1.7251, 1.5261], device='cuda:6'), covar=tensor([0.1658, 0.2696, 0.3424, 0.2780, 0.2502, 0.1807, 0.2847, 0.2053], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0193, 0.0235, 0.0253, 0.0224, 0.0187, 0.0209, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:03:48,180 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5284, 2.1309, 2.8181, 1.8687, 2.6465, 2.8011, 2.0510, 2.9625], device='cuda:6'), covar=tensor([0.1673, 0.2338, 0.1776, 0.2636, 0.1037, 0.1717, 0.3015, 0.0872], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0205, 0.0202, 0.0196, 0.0182, 0.0223, 0.0214, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:03:51,624 INFO [finetune.py:976] (6/7) Epoch 5, batch 400, loss[loss=0.2496, simple_loss=0.296, pruned_loss=0.1016, over 4773.00 frames. ], tot_loss[loss=0.2297, simple_loss=0.2863, pruned_loss=0.08656, over 824316.80 frames. ], batch size: 51, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 04:03:58,187 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.903e+01 1.731e+02 2.116e+02 2.565e+02 5.981e+02, threshold=4.232e+02, percent-clipped=1.0 2023-03-26 04:04:13,598 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23343.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:04:24,905 INFO [finetune.py:976] (6/7) Epoch 5, batch 450, loss[loss=0.2275, simple_loss=0.2929, pruned_loss=0.08112, over 4898.00 frames. ], tot_loss[loss=0.2278, simple_loss=0.2845, pruned_loss=0.08553, over 855469.93 frames. 
], batch size: 36, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 04:04:26,812 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7466, 1.7296, 1.5404, 1.8466, 2.2767, 1.8319, 1.6045, 1.3825], device='cuda:6'), covar=tensor([0.2417, 0.2203, 0.2034, 0.1930, 0.2001, 0.1308, 0.2726, 0.2008], device='cuda:6'), in_proj_covar=tensor([0.0232, 0.0209, 0.0198, 0.0184, 0.0235, 0.0174, 0.0214, 0.0187], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:04:59,761 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1512, 1.6294, 1.8865, 1.9450, 1.7208, 1.7604, 1.8944, 1.7843], device='cuda:6'), covar=tensor([0.6845, 0.9772, 0.7611, 0.8729, 1.0230, 0.7175, 1.1916, 0.7413], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0252, 0.0258, 0.0261, 0.0243, 0.0220, 0.0279, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:05:10,551 INFO [finetune.py:976] (6/7) Epoch 5, batch 500, loss[loss=0.3345, simple_loss=0.3631, pruned_loss=0.1529, over 4254.00 frames. ], tot_loss[loss=0.226, simple_loss=0.282, pruned_loss=0.08494, over 876992.19 frames. ], batch size: 66, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 04:05:16,626 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.775e+02 2.029e+02 2.615e+02 5.539e+02, threshold=4.057e+02, percent-clipped=1.0 2023-03-26 04:05:19,908 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 04:05:36,382 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.98 vs. limit=5.0 2023-03-26 04:05:42,941 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23450.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:05:51,570 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-26 04:05:52,276 INFO [finetune.py:976] (6/7) Epoch 5, batch 550, loss[loss=0.2271, simple_loss=0.2825, pruned_loss=0.08587, over 4767.00 frames. ], tot_loss[loss=0.2227, simple_loss=0.2788, pruned_loss=0.08334, over 894616.74 frames. ], batch size: 27, lr: 3.95e-03, grad_scale: 32.0 2023-03-26 04:05:54,212 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23464.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:06:15,745 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23481.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:06:23,744 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=23488.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:06:30,290 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23498.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:06:38,166 INFO [finetune.py:976] (6/7) Epoch 5, batch 600, loss[loss=0.2464, simple_loss=0.3021, pruned_loss=0.09536, over 4909.00 frames. ], tot_loss[loss=0.222, simple_loss=0.2786, pruned_loss=0.08268, over 910127.87 frames. 
], batch size: 37, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:06:44,764 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.759e+02 2.043e+02 2.434e+02 4.744e+02, threshold=4.086e+02, percent-clipped=2.0 2023-03-26 04:06:51,182 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23529.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:06:58,627 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-26 04:07:18,720 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=23549.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 04:07:23,583 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5623, 1.4490, 1.5003, 1.5511, 1.0982, 3.5294, 1.3108, 1.8402], device='cuda:6'), covar=tensor([0.3673, 0.2631, 0.2129, 0.2368, 0.2072, 0.0181, 0.2860, 0.1456], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0112, 0.0116, 0.0120, 0.0116, 0.0097, 0.0101, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:07:25,882 INFO [finetune.py:976] (6/7) Epoch 5, batch 650, loss[loss=0.217, simple_loss=0.2831, pruned_loss=0.07541, over 4864.00 frames. ], tot_loss[loss=0.2246, simple_loss=0.2821, pruned_loss=0.08359, over 921597.18 frames. ], batch size: 44, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:07:33,761 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23573.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:07:43,056 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23585.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:08:10,718 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0 2023-03-26 04:08:14,418 INFO [finetune.py:976] (6/7) Epoch 5, batch 700, loss[loss=0.215, simple_loss=0.2825, pruned_loss=0.07378, over 4740.00 frames. ], tot_loss[loss=0.2268, simple_loss=0.2839, pruned_loss=0.0849, over 926538.79 frames. ], batch size: 27, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:08:30,906 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.701e+02 2.127e+02 2.576e+02 5.648e+02, threshold=4.253e+02, percent-clipped=2.0 2023-03-26 04:09:25,209 INFO [finetune.py:976] (6/7) Epoch 5, batch 750, loss[loss=0.1837, simple_loss=0.2626, pruned_loss=0.0524, over 4834.00 frames. ], tot_loss[loss=0.2292, simple_loss=0.2862, pruned_loss=0.08608, over 931067.22 frames. ], batch size: 47, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:09:31,317 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3215, 1.4187, 1.2008, 1.3619, 1.5873, 1.5497, 1.4236, 1.2598], device='cuda:6'), covar=tensor([0.0341, 0.0219, 0.0496, 0.0267, 0.0185, 0.0359, 0.0250, 0.0300], device='cuda:6'), in_proj_covar=tensor([0.0087, 0.0114, 0.0139, 0.0120, 0.0105, 0.0102, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([6.8084e-05, 9.0106e-05, 1.1215e-04, 9.4790e-05, 8.3497e-05, 7.6190e-05, 7.0194e-05, 8.5959e-05], device='cuda:6') 2023-03-26 04:10:02,096 INFO [finetune.py:976] (6/7) Epoch 5, batch 800, loss[loss=0.2167, simple_loss=0.2766, pruned_loss=0.07841, over 4892.00 frames. ], tot_loss[loss=0.2276, simple_loss=0.285, pruned_loss=0.08515, over 935605.68 frames. 
], batch size: 35, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:10:08,710 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.635e+02 1.954e+02 2.429e+02 4.773e+02, threshold=3.908e+02, percent-clipped=1.0 2023-03-26 04:10:14,569 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 04:10:56,824 INFO [finetune.py:976] (6/7) Epoch 5, batch 850, loss[loss=0.2005, simple_loss=0.2612, pruned_loss=0.06986, over 4815.00 frames. ], tot_loss[loss=0.2261, simple_loss=0.2829, pruned_loss=0.08462, over 938827.78 frames. ], batch size: 40, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:11:04,297 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23764.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:11:51,101 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8912, 1.8434, 1.7669, 1.8040, 1.3271, 3.7321, 1.7115, 2.2609], device='cuda:6'), covar=tensor([0.2954, 0.1994, 0.1663, 0.1937, 0.1489, 0.0159, 0.2432, 0.1125], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0112, 0.0115, 0.0120, 0.0115, 0.0097, 0.0100, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:11:54,659 INFO [finetune.py:976] (6/7) Epoch 5, batch 900, loss[loss=0.2474, simple_loss=0.2991, pruned_loss=0.09788, over 4908.00 frames. ], tot_loss[loss=0.223, simple_loss=0.2797, pruned_loss=0.08312, over 942432.52 frames. ], batch size: 36, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:11:55,341 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23812.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:11:59,621 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7847, 4.0656, 3.8399, 1.9592, 4.0874, 2.9756, 0.7635, 2.8356], device='cuda:6'), covar=tensor([0.2344, 0.1756, 0.1585, 0.3475, 0.0920, 0.1034, 0.4793, 0.1547], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0170, 0.0163, 0.0128, 0.0155, 0.0122, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 04:12:00,784 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.642e+02 1.957e+02 2.389e+02 4.840e+02, threshold=3.913e+02, percent-clipped=2.0 2023-03-26 04:12:21,817 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=23844.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:12:37,545 INFO [finetune.py:976] (6/7) Epoch 5, batch 950, loss[loss=0.1816, simple_loss=0.251, pruned_loss=0.05614, over 4752.00 frames. ], tot_loss[loss=0.2208, simple_loss=0.2774, pruned_loss=0.0821, over 946065.66 frames. ], batch size: 28, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:12:44,912 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23873.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:12:52,633 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=23885.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:13:28,834 INFO [finetune.py:976] (6/7) Epoch 5, batch 1000, loss[loss=0.2397, simple_loss=0.2787, pruned_loss=0.1003, over 4798.00 frames. ], tot_loss[loss=0.2222, simple_loss=0.2794, pruned_loss=0.08247, over 950768.87 frames. 
], batch size: 25, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:13:38,585 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.702e+02 2.066e+02 2.385e+02 5.722e+02, threshold=4.131e+02, percent-clipped=3.0 2023-03-26 04:13:38,656 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23921.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:13:47,709 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=23933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:13:57,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5301, 1.3875, 1.3285, 1.5543, 1.5393, 1.5540, 0.8587, 1.3181], device='cuda:6'), covar=tensor([0.2052, 0.2151, 0.1784, 0.1650, 0.1681, 0.1175, 0.2819, 0.1732], device='cuda:6'), in_proj_covar=tensor([0.0233, 0.0210, 0.0199, 0.0184, 0.0236, 0.0175, 0.0215, 0.0188], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:14:14,856 INFO [finetune.py:976] (6/7) Epoch 5, batch 1050, loss[loss=0.2424, simple_loss=0.3015, pruned_loss=0.09166, over 4835.00 frames. ], tot_loss[loss=0.2263, simple_loss=0.2835, pruned_loss=0.08456, over 950508.73 frames. ], batch size: 47, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:14:17,902 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5859, 1.6274, 1.8696, 1.8954, 1.6993, 3.1753, 1.4374, 1.7273], device='cuda:6'), covar=tensor([0.0909, 0.1414, 0.1302, 0.0896, 0.1396, 0.0274, 0.1317, 0.1432], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0082, 0.0077, 0.0080, 0.0093, 0.0084, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 04:15:19,194 INFO [finetune.py:976] (6/7) Epoch 5, batch 1100, loss[loss=0.2505, simple_loss=0.3041, pruned_loss=0.09849, over 4830.00 frames. ], tot_loss[loss=0.2277, simple_loss=0.2849, pruned_loss=0.08522, over 949171.14 frames. ], batch size: 30, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:15:28,407 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.244e+02 1.825e+02 2.109e+02 2.589e+02 5.024e+02, threshold=4.219e+02, percent-clipped=4.0 2023-03-26 04:15:38,238 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24037.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:15:50,615 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 04:15:54,059 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-26 04:15:54,485 INFO [finetune.py:976] (6/7) Epoch 5, batch 1150, loss[loss=0.2076, simple_loss=0.2667, pruned_loss=0.07419, over 4775.00 frames. ], tot_loss[loss=0.2274, simple_loss=0.2849, pruned_loss=0.08498, over 950858.92 frames. 
], batch size: 25, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:16:13,012 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5363, 1.3498, 1.0057, 0.3490, 1.1767, 1.3782, 1.2586, 1.3256], device='cuda:6'), covar=tensor([0.0843, 0.0654, 0.1121, 0.1735, 0.1216, 0.1962, 0.1835, 0.0787], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0199, 0.0201, 0.0189, 0.0214, 0.0207, 0.0219, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:16:18,920 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24098.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:16:23,117 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.88 vs. limit=5.0 2023-03-26 04:16:28,015 INFO [finetune.py:976] (6/7) Epoch 5, batch 1200, loss[loss=0.2187, simple_loss=0.2668, pruned_loss=0.08528, over 4808.00 frames. ], tot_loss[loss=0.2256, simple_loss=0.2831, pruned_loss=0.08407, over 950753.53 frames. ], batch size: 39, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:16:37,229 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.193e+02 1.721e+02 2.129e+02 2.606e+02 7.150e+02, threshold=4.257e+02, percent-clipped=3.0 2023-03-26 04:16:47,820 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1394, 1.3527, 1.4655, 0.5695, 1.2323, 1.6356, 1.5490, 1.3485], device='cuda:6'), covar=tensor([0.0871, 0.0511, 0.0464, 0.0573, 0.0518, 0.0393, 0.0359, 0.0561], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0158, 0.0118, 0.0136, 0.0133, 0.0122, 0.0148, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.7955e-05, 1.1760e-04, 8.5904e-05, 9.9557e-05, 9.6353e-05, 9.0387e-05, 1.0978e-04, 1.0764e-04], device='cuda:6') 2023-03-26 04:16:51,872 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=24144.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:17:03,461 INFO [finetune.py:976] (6/7) Epoch 5, batch 1250, loss[loss=0.1639, simple_loss=0.2331, pruned_loss=0.04741, over 4825.00 frames. ], tot_loss[loss=0.2235, simple_loss=0.2809, pruned_loss=0.08307, over 951838.42 frames. ], batch size: 25, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:17:15,856 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0 2023-03-26 04:17:26,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1755, 1.8637, 2.5733, 4.2677, 2.9969, 2.7477, 0.9978, 3.4152], device='cuda:6'), covar=tensor([0.1793, 0.1601, 0.1476, 0.0567, 0.0761, 0.1587, 0.2125, 0.0539], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0119, 0.0136, 0.0167, 0.0103, 0.0143, 0.0129, 0.0105], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 04:17:28,511 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=24192.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:17:38,420 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0 2023-03-26 04:17:42,423 INFO [finetune.py:976] (6/7) Epoch 5, batch 1300, loss[loss=0.2039, simple_loss=0.2606, pruned_loss=0.07358, over 4832.00 frames. ], tot_loss[loss=0.2205, simple_loss=0.2776, pruned_loss=0.08171, over 954523.06 frames. 
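
The zipformer.py:1188 records track scheduled layer skipping (stochastic depth): each encoder stack has its own warmup window over batch_count, and on occasional batches whole layers are dropped at random, as in num_to_drop=1, layers_to_drop={1} above. A sketch of that decision; the drop rates here are stated assumptions, not the recipe's constants:

```python
import random

def pick_layers_to_drop(batch_count, warmup_begin, warmup_end, num_layers, rng):
    """Sketch of the layers_to_drop decision in the zipformer.py:1188
    records. Assumption: whole layers are skipped with a probability
    that anneals as batch_count moves through the stack's warmup
    window; the rates below are illustrative only."""
    if batch_count < warmup_begin:
        drop_prob = 0.5
    elif batch_count < warmup_end:
        # Anneal linearly inside the warmup window.
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        drop_prob = 0.5 * (1.0 - frac) + 0.05 * frac
    else:
        drop_prob = 0.05
    return {i for i in range(num_layers) if rng.random() < drop_prob}

# e.g. pick_layers_to_drop(23844.0, 1333.3, 2000.0, 4, random.Random(0))
# usually returns set() and occasionally a singleton, as in the log.
```
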
], batch size: 49, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:17:56,618 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.607e+02 1.851e+02 2.364e+02 3.844e+02, threshold=3.702e+02, percent-clipped=0.0 2023-03-26 04:17:59,185 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 04:18:34,750 INFO [finetune.py:976] (6/7) Epoch 5, batch 1350, loss[loss=0.2087, simple_loss=0.2674, pruned_loss=0.07501, over 4865.00 frames. ], tot_loss[loss=0.2197, simple_loss=0.2771, pruned_loss=0.08116, over 955742.59 frames. ], batch size: 31, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:18:37,178 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6192, 1.6264, 1.6232, 0.8748, 1.6445, 1.9207, 1.8021, 1.4634], device='cuda:6'), covar=tensor([0.1037, 0.0620, 0.0477, 0.0622, 0.0392, 0.0602, 0.0327, 0.0635], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0158, 0.0119, 0.0136, 0.0133, 0.0122, 0.0147, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.7736e-05, 1.1704e-04, 8.6131e-05, 9.9234e-05, 9.5861e-05, 9.0550e-05, 1.0903e-04, 1.0729e-04], device='cuda:6') 2023-03-26 04:18:43,163 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.95 vs. limit=5.0 2023-03-26 04:18:44,884 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.67 vs. limit=2.0 2023-03-26 04:19:12,815 INFO [finetune.py:976] (6/7) Epoch 5, batch 1400, loss[loss=0.2559, simple_loss=0.3126, pruned_loss=0.09961, over 4807.00 frames. ], tot_loss[loss=0.2229, simple_loss=0.2805, pruned_loss=0.08268, over 956606.26 frames. ], batch size: 38, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:19:21,620 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.173e+02 1.714e+02 2.138e+02 2.571e+02 4.877e+02, threshold=4.276e+02, percent-clipped=6.0 2023-03-26 04:19:56,901 INFO [finetune.py:976] (6/7) Epoch 5, batch 1450, loss[loss=0.2121, simple_loss=0.2635, pruned_loss=0.08033, over 3180.00 frames. ], tot_loss[loss=0.2265, simple_loss=0.2841, pruned_loss=0.08449, over 952445.12 frames. ], batch size: 13, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:20:20,205 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24393.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:20:23,287 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6695, 1.4479, 1.3578, 1.3630, 1.8823, 1.8984, 1.6575, 1.3984], device='cuda:6'), covar=tensor([0.0266, 0.0375, 0.0533, 0.0376, 0.0199, 0.0395, 0.0266, 0.0376], device='cuda:6'), in_proj_covar=tensor([0.0087, 0.0114, 0.0140, 0.0119, 0.0106, 0.0101, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([6.8143e-05, 8.9612e-05, 1.1235e-04, 9.4151e-05, 8.3916e-05, 7.5574e-05, 7.0094e-05, 8.5924e-05], device='cuda:6') 2023-03-26 04:20:29,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3232, 3.7809, 3.9201, 4.1709, 4.0585, 3.8070, 4.3901, 1.3036], device='cuda:6'), covar=tensor([0.0691, 0.0700, 0.0675, 0.0830, 0.1133, 0.1311, 0.0566, 0.4824], device='cuda:6'), in_proj_covar=tensor([0.0357, 0.0244, 0.0273, 0.0291, 0.0337, 0.0284, 0.0305, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:20:31,768 INFO [finetune.py:976] (6/7) Epoch 5, batch 1500, loss[loss=0.2225, simple_loss=0.2803, pruned_loss=0.08239, over 4931.00 frames. ], tot_loss[loss=0.2271, simple_loss=0.2847, pruned_loss=0.08473, over 951993.39 frames. 
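
The grad_scale field is the fp16 loss-scaling factor used by mixed-precision training. Over this section it moves 32.0 -> 64.0 (around batch 2350) -> 32.0 -> 16.0 (by batch 3200), the classic double-on-success, halve-on-overflow pattern. PyTorch's GradScaler implements exactly that policy; the growth_interval below is an assumption, not this run's setting:

```python
import torch

scaler = torch.cuda.amp.GradScaler(
    init_scale=32.0,     # the grad_scale printed in these records
    growth_factor=2.0,   # 32 -> 64 after enough overflow-free steps
    backoff_factor=0.5,  # 64 -> 32 -> 16 when infs/NaNs appear
    growth_interval=2000,
)

# Typical step (model, optimizer, batch assumed to exist elsewhere):
#   with torch.cuda.amp.autocast():
#       loss = compute_loss(model, batch)
#   scaler.scale(loss).backward()
#   scaler.step(optimizer)
#   scaler.update()
#   print(scaler.get_scale())   # the value logged as grad_scale
```
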
], batch size: 33, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:20:38,325 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.776e+02 2.138e+02 2.564e+02 4.291e+02, threshold=4.276e+02, percent-clipped=1.0 2023-03-26 04:21:13,451 INFO [finetune.py:976] (6/7) Epoch 5, batch 1550, loss[loss=0.2339, simple_loss=0.2978, pruned_loss=0.08503, over 4842.00 frames. ], tot_loss[loss=0.2272, simple_loss=0.285, pruned_loss=0.08471, over 953099.80 frames. ], batch size: 44, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:21:25,153 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0741, 0.8524, 0.9025, 1.1427, 1.2030, 1.1682, 1.0350, 1.0053], device='cuda:6'), covar=tensor([0.0288, 0.0314, 0.0594, 0.0259, 0.0240, 0.0440, 0.0302, 0.0343], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0114, 0.0141, 0.0119, 0.0106, 0.0102, 0.0092, 0.0111], device='cuda:6'), out_proj_covar=tensor([6.8482e-05, 8.9996e-05, 1.1320e-04, 9.4435e-05, 8.4161e-05, 7.5913e-05, 7.0340e-05, 8.6412e-05], device='cuda:6') 2023-03-26 04:21:47,122 INFO [finetune.py:976] (6/7) Epoch 5, batch 1600, loss[loss=0.1895, simple_loss=0.2416, pruned_loss=0.06873, over 4829.00 frames. ], tot_loss[loss=0.2247, simple_loss=0.2826, pruned_loss=0.08339, over 955247.31 frames. ], batch size: 30, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:21:58,801 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.205e+02 1.770e+02 2.018e+02 2.552e+02 5.194e+02, threshold=4.037e+02, percent-clipped=4.0 2023-03-26 04:22:21,129 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24547.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:22:33,857 INFO [finetune.py:976] (6/7) Epoch 5, batch 1650, loss[loss=0.2451, simple_loss=0.2989, pruned_loss=0.0956, over 4908.00 frames. ], tot_loss[loss=0.2218, simple_loss=0.2796, pruned_loss=0.08201, over 954630.72 frames. ], batch size: 32, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:22:42,169 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0 2023-03-26 04:22:43,195 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:23:17,418 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24608.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:23:22,440 INFO [finetune.py:976] (6/7) Epoch 5, batch 1700, loss[loss=0.1878, simple_loss=0.2522, pruned_loss=0.06166, over 4753.00 frames. ], tot_loss[loss=0.2185, simple_loss=0.2757, pruned_loss=0.08064, over 955450.01 frames. ], batch size: 26, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:23:31,255 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.673e+02 1.915e+02 2.251e+02 4.027e+02, threshold=3.830e+02, percent-clipped=0.0 2023-03-26 04:23:42,118 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24630.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:23:42,188 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 04:24:06,427 INFO [finetune.py:976] (6/7) Epoch 5, batch 1750, loss[loss=0.257, simple_loss=0.3144, pruned_loss=0.09977, over 4905.00 frames. ], tot_loss[loss=0.2194, simple_loss=0.2772, pruned_loss=0.0808, over 955319.61 frames. 
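
Each finetune.py:976 record pairs the current batch's losses ("over ~5k frames", one batch) with tot_loss, a frame-weighted running average whose frame count hovers near 950k. A decaying accumulator reproduces this: with roughly 4,800 frames per batch, a decay of 1 - 1/200 = 0.995 drives the decayed frame count toward 4800 / 0.005, about 960k, consistent with the totals above (the decay constant is an assumption tied to a 200-batch reset interval):

```python
class RunningLoss:
    """Decaying, frame-weighted running average (a sketch of tot_loss)."""

    def __init__(self, decay=0.995):
        self.decay = decay
        self.loss_sum = 0.0   # decayed sum of per-frame losses
        self.frames = 0.0     # decayed frame count ("over ... frames")

    def update(self, batch_loss_per_frame, batch_frames):
        self.loss_sum = self.decay * self.loss_sum + batch_loss_per_frame * batch_frames
        self.frames = self.decay * self.frames + batch_frames

    @property
    def value(self):
        return self.loss_sum / max(self.frames, 1.0)
```
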
], batch size: 37, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:24:28,000 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=24693.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:24:31,600 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8439, 1.1050, 1.5860, 1.5930, 1.4464, 1.4544, 1.4821, 1.4523], device='cuda:6'), covar=tensor([0.5630, 0.8012, 0.6346, 0.7266, 0.8212, 0.6022, 0.8932, 0.6092], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0252, 0.0257, 0.0261, 0.0243, 0.0220, 0.0277, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:24:39,306 INFO [finetune.py:976] (6/7) Epoch 5, batch 1800, loss[loss=0.2648, simple_loss=0.3201, pruned_loss=0.1047, over 4758.00 frames. ], tot_loss[loss=0.2223, simple_loss=0.2806, pruned_loss=0.08203, over 956048.57 frames. ], batch size: 54, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:24:41,670 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-26 04:24:45,830 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.243e+02 1.800e+02 2.166e+02 2.491e+02 4.201e+02, threshold=4.331e+02, percent-clipped=2.0 2023-03-26 04:24:59,914 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=24741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:25:12,942 INFO [finetune.py:976] (6/7) Epoch 5, batch 1850, loss[loss=0.2347, simple_loss=0.3038, pruned_loss=0.08282, over 4889.00 frames. ], tot_loss[loss=0.2237, simple_loss=0.2826, pruned_loss=0.08238, over 955308.16 frames. ], batch size: 43, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:25:15,503 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24765.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:25:31,047 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1135, 2.0034, 1.6001, 2.1715, 2.1006, 1.7967, 2.5287, 2.0841], device='cuda:6'), covar=tensor([0.1822, 0.3385, 0.4264, 0.3648, 0.3150, 0.2081, 0.3619, 0.2538], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0195, 0.0238, 0.0256, 0.0229, 0.0190, 0.0211, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:25:32,266 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 04:25:46,396 INFO [finetune.py:976] (6/7) Epoch 5, batch 1900, loss[loss=0.2319, simple_loss=0.2926, pruned_loss=0.08562, over 4820.00 frames. ], tot_loss[loss=0.225, simple_loss=0.2841, pruned_loss=0.08298, over 955219.86 frames. 
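
The scaling.py:679 "Whitening" lines come from a regularizer that watches how far per-group activation covariance is from isotropic and penalizes only when the metric exceeds its limit (2.0 for the grouped 96- and 192-channel checks, 5.0 for the full 384-channel one; every value in this section stays under its limit, so no penalty fires). One natural metric of this kind is E[lambda^2] / (E[lambda])^2 over the covariance eigenvalues, equal to 1.0 for perfectly white features; the sketch below assumes that form and zero-mean activations, so take it as an illustration rather than the module's exact code:

```python
import torch

def whitening_metric(x, num_groups):
    """x: (..., num_channels). Returns E[lambda^2] / (E[lambda])^2 over
    the eigenvalues of each group's feature covariance, averaged over
    groups; 1.0 means perfectly whitened, larger means more anisotropic.
    Assumes zero-mean activations (an assumption of this sketch)."""
    x = x.reshape(-1, x.shape[-1])
    n, c = x.shape
    g = c // num_groups
    xg = x.reshape(n, num_groups, g).transpose(0, 1)      # (groups, n, g)
    cov = xg.transpose(1, 2) @ xg / n                     # (groups, g, g)
    mean_eig = cov.diagonal(dim1=1, dim2=2).mean(dim=1)   # tr(C) / g
    mean_eig_sq = (cov * cov).sum(dim=(1, 2)) / g         # tr(C^2) / g
    metric = mean_eig_sq / mean_eig.clamp(min=1e-20) ** 2
    return metric.mean().item()
```
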
], batch size: 47, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:25:52,457 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.802e+02 2.061e+02 2.489e+02 6.200e+02, threshold=4.122e+02, percent-clipped=1.0 2023-03-26 04:25:54,311 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9297, 1.2329, 0.9510, 1.6428, 2.1542, 1.5838, 1.6332, 1.7890], device='cuda:6'), covar=tensor([0.1420, 0.2047, 0.2155, 0.1210, 0.1934, 0.2021, 0.1433, 0.1988], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0099, 0.0117, 0.0094, 0.0125, 0.0097, 0.0101, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 04:25:57,965 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24826.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:26:20,827 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:26:29,631 INFO [finetune.py:976] (6/7) Epoch 5, batch 1950, loss[loss=0.2014, simple_loss=0.2604, pruned_loss=0.07115, over 4763.00 frames. ], tot_loss[loss=0.224, simple_loss=0.2831, pruned_loss=0.08242, over 954597.67 frames. ], batch size: 28, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:26:57,652 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24903.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:27:01,815 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=24909.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:27:02,944 INFO [finetune.py:976] (6/7) Epoch 5, batch 2000, loss[loss=0.1992, simple_loss=0.2605, pruned_loss=0.06894, over 4785.00 frames. ], tot_loss[loss=0.2214, simple_loss=0.2801, pruned_loss=0.08137, over 956845.24 frames. ], batch size: 29, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:27:13,001 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.611e+02 2.012e+02 2.424e+02 3.709e+02, threshold=4.024e+02, percent-clipped=0.0 2023-03-26 04:27:16,018 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=24925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:27:48,690 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7345, 1.1415, 0.7122, 1.5955, 2.2025, 1.5266, 1.4964, 1.8670], device='cuda:6'), covar=tensor([0.1584, 0.2146, 0.2345, 0.1277, 0.2001, 0.1979, 0.1525, 0.1901], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0116, 0.0093, 0.0124, 0.0096, 0.0100, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 04:27:50,327 INFO [finetune.py:976] (6/7) Epoch 5, batch 2050, loss[loss=0.2162, simple_loss=0.268, pruned_loss=0.08222, over 4899.00 frames. ], tot_loss[loss=0.2203, simple_loss=0.2783, pruned_loss=0.08112, over 957840.87 frames. ], batch size: 32, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:27:59,682 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.12 vs. 
limit=5.0 2023-03-26 04:28:12,193 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=24995.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:28:17,487 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6135, 2.1887, 1.6724, 0.7805, 1.9051, 2.2075, 1.8411, 1.9515], device='cuda:6'), covar=tensor([0.0783, 0.0801, 0.1544, 0.2115, 0.1292, 0.1776, 0.2146, 0.0855], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0203, 0.0190, 0.0215, 0.0208, 0.0220, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:28:23,337 INFO [finetune.py:976] (6/7) Epoch 5, batch 2100, loss[loss=0.1979, simple_loss=0.2661, pruned_loss=0.06488, over 4811.00 frames. ], tot_loss[loss=0.2177, simple_loss=0.2756, pruned_loss=0.0799, over 956456.45 frames. ], batch size: 41, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:28:39,039 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.247e+01 1.663e+02 1.989e+02 2.457e+02 4.446e+02, threshold=3.978e+02, percent-clipped=1.0 2023-03-26 04:28:43,205 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 04:28:50,800 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 04:29:08,285 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25056.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:29:11,161 INFO [finetune.py:976] (6/7) Epoch 5, batch 2150, loss[loss=0.1701, simple_loss=0.2298, pruned_loss=0.05518, over 4303.00 frames. ], tot_loss[loss=0.2203, simple_loss=0.2787, pruned_loss=0.08091, over 956241.97 frames. ], batch size: 19, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:29:12,990 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1433, 2.6630, 2.6377, 1.3100, 2.8211, 1.9910, 0.7297, 1.8165], device='cuda:6'), covar=tensor([0.2088, 0.2014, 0.1533, 0.3004, 0.1179, 0.1157, 0.3808, 0.1513], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0172, 0.0165, 0.0129, 0.0156, 0.0123, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 04:29:45,150 INFO [finetune.py:976] (6/7) Epoch 5, batch 2200, loss[loss=0.3054, simple_loss=0.3536, pruned_loss=0.1286, over 4249.00 frames. ], tot_loss[loss=0.2255, simple_loss=0.2836, pruned_loss=0.08373, over 955220.26 frames. 
], batch size: 66, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:29:52,261 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.691e+02 1.983e+02 2.301e+02 4.176e+02, threshold=3.967e+02, percent-clipped=1.0 2023-03-26 04:29:52,344 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25121.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:29:54,188 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3044, 1.5923, 0.8453, 2.1447, 2.5349, 1.7840, 2.1035, 2.1574], device='cuda:6'), covar=tensor([0.1329, 0.1882, 0.2170, 0.1075, 0.1759, 0.1966, 0.1275, 0.1799], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0116, 0.0093, 0.0123, 0.0096, 0.0100, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 04:29:56,636 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5962, 1.4772, 1.4507, 1.5649, 1.1715, 3.1026, 1.3358, 1.8687], device='cuda:6'), covar=tensor([0.3788, 0.2494, 0.2219, 0.2504, 0.1931, 0.0237, 0.2603, 0.1311], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0114, 0.0117, 0.0121, 0.0117, 0.0098, 0.0101, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:30:00,392 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7590, 0.8982, 1.6194, 1.5536, 1.4438, 1.4384, 1.4037, 1.4587], device='cuda:6'), covar=tensor([0.5417, 0.7822, 0.6240, 0.7060, 0.7794, 0.5693, 0.8522, 0.5886], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0249, 0.0255, 0.0259, 0.0241, 0.0218, 0.0274, 0.0222], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:30:12,311 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8259, 1.6858, 2.3279, 1.4472, 2.0783, 2.1436, 1.6022, 2.3096], device='cuda:6'), covar=tensor([0.1616, 0.2254, 0.1464, 0.2377, 0.0989, 0.1663, 0.2715, 0.0952], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0207, 0.0202, 0.0197, 0.0184, 0.0224, 0.0217, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:30:18,607 INFO [finetune.py:976] (6/7) Epoch 5, batch 2250, loss[loss=0.2395, simple_loss=0.301, pruned_loss=0.08895, over 4809.00 frames. ], tot_loss[loss=0.2285, simple_loss=0.2865, pruned_loss=0.08523, over 954760.79 frames. ], batch size: 38, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:30:18,845 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 04:30:46,343 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25203.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:30:47,789 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25204.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:30:53,042 INFO [finetune.py:976] (6/7) Epoch 5, batch 2300, loss[loss=0.2268, simple_loss=0.2807, pruned_loss=0.08641, over 4813.00 frames. ], tot_loss[loss=0.2267, simple_loss=0.2854, pruned_loss=0.08399, over 953543.24 frames. 
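
A cross-check on the loss records: in pruned-transducer training the printed loss is a weighted sum of the cheap "simple" joiner loss (which also supplies the pruning bounds) and the exact pruned loss. A weight of 0.5 on the simple loss makes the numbers above self-consistent, e.g. 0.5 * 0.2865 + 0.08523 = 0.2285 for the batch-2250 totals; the weight is inferred from the logged numbers, not quoted from the code:

```python
def total_transducer_loss(simple_loss, pruned_loss, simple_loss_scale=0.5):
    # The "simple" loss trains the linear joiner used to pick pruning
    # bounds; the pruned loss is the exact RNN-T objective inside those
    # bounds. The logged `loss` matches this weighted sum.
    return simple_loss_scale * simple_loss + pruned_loss
```
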
], batch size: 33, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:31:05,174 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.234e+02 1.840e+02 2.117e+02 2.638e+02 5.911e+02, threshold=4.234e+02, percent-clipped=5.0 2023-03-26 04:31:14,024 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:31:15,873 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4136, 1.5173, 1.6007, 0.9742, 1.4276, 1.7768, 1.8036, 1.3727], device='cuda:6'), covar=tensor([0.0882, 0.0488, 0.0401, 0.0501, 0.0419, 0.0423, 0.0245, 0.0570], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0159, 0.0119, 0.0137, 0.0134, 0.0124, 0.0149, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.8586e-05, 1.1805e-04, 8.6524e-05, 1.0053e-04, 9.6863e-05, 9.1513e-05, 1.1051e-04, 1.0818e-04], device='cuda:6') 2023-03-26 04:31:35,993 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25251.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:31:42,890 INFO [finetune.py:976] (6/7) Epoch 5, batch 2350, loss[loss=0.2129, simple_loss=0.2645, pruned_loss=0.08067, over 4815.00 frames. ], tot_loss[loss=0.2246, simple_loss=0.283, pruned_loss=0.08307, over 953404.80 frames. ], batch size: 51, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:31:51,256 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:31:55,960 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2029, 1.9295, 1.9683, 0.7601, 2.0948, 2.4859, 2.0254, 1.9226], device='cuda:6'), covar=tensor([0.0921, 0.0722, 0.0566, 0.0793, 0.0744, 0.0381, 0.0468, 0.0597], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0160, 0.0120, 0.0138, 0.0135, 0.0124, 0.0150, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.9137e-05, 1.1882e-04, 8.7195e-05, 1.0097e-04, 9.7470e-05, 9.1962e-05, 1.1122e-04, 1.0892e-04], device='cuda:6') 2023-03-26 04:31:59,572 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1781, 1.8623, 1.3808, 0.5910, 1.5958, 1.7908, 1.4989, 1.7173], device='cuda:6'), covar=tensor([0.0820, 0.0844, 0.1694, 0.2313, 0.1510, 0.2684, 0.2636, 0.0975], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0202, 0.0204, 0.0190, 0.0218, 0.0211, 0.0222, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:32:16,766 INFO [finetune.py:976] (6/7) Epoch 5, batch 2400, loss[loss=0.1981, simple_loss=0.2599, pruned_loss=0.06817, over 4827.00 frames. ], tot_loss[loss=0.2211, simple_loss=0.2793, pruned_loss=0.08148, over 954974.80 frames. ], batch size: 38, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:32:18,157 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9284, 1.7166, 1.6355, 1.8878, 2.4965, 2.0056, 1.5791, 1.5505], device='cuda:6'), covar=tensor([0.2458, 0.2439, 0.2156, 0.2004, 0.1904, 0.1228, 0.2943, 0.2105], device='cuda:6'), in_proj_covar=tensor([0.0232, 0.0208, 0.0198, 0.0184, 0.0234, 0.0174, 0.0214, 0.0187], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:32:23,860 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.140e+02 1.631e+02 1.900e+02 2.318e+02 5.058e+02, threshold=3.799e+02, percent-clipped=1.0 2023-03-26 04:32:30,630 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. 
limit=2.0 2023-03-26 04:32:58,159 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25351.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 04:33:04,183 INFO [finetune.py:976] (6/7) Epoch 5, batch 2450, loss[loss=0.2062, simple_loss=0.2693, pruned_loss=0.0715, over 4921.00 frames. ], tot_loss[loss=0.2183, simple_loss=0.2759, pruned_loss=0.08032, over 955527.81 frames. ], batch size: 36, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:33:28,314 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0992, 2.3319, 2.1206, 1.5639, 2.3472, 2.3085, 2.1642, 1.9510], device='cuda:6'), covar=tensor([0.0735, 0.0592, 0.0792, 0.1027, 0.0440, 0.0797, 0.0737, 0.0988], device='cuda:6'), in_proj_covar=tensor([0.0138, 0.0133, 0.0143, 0.0127, 0.0110, 0.0142, 0.0144, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:33:46,834 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4912, 1.3322, 1.3895, 1.4467, 1.1145, 2.8477, 1.1900, 1.6188], device='cuda:6'), covar=tensor([0.3439, 0.2520, 0.2138, 0.2280, 0.1881, 0.0266, 0.2819, 0.1324], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0114, 0.0117, 0.0121, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:34:02,098 INFO [finetune.py:976] (6/7) Epoch 5, batch 2500, loss[loss=0.2282, simple_loss=0.2975, pruned_loss=0.07942, over 4854.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.2785, pruned_loss=0.08193, over 955440.43 frames. ], batch size: 44, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:34:08,099 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5295, 1.2515, 1.8733, 3.1691, 2.1306, 2.4464, 0.8712, 2.6670], device='cuda:6'), covar=tensor([0.2255, 0.2316, 0.1759, 0.1049, 0.1039, 0.1557, 0.2517, 0.0801], device='cuda:6'), in_proj_covar=tensor([0.0105, 0.0119, 0.0137, 0.0167, 0.0103, 0.0144, 0.0129, 0.0105], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 04:34:18,833 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.793e+02 2.115e+02 2.620e+02 5.379e+02, threshold=4.229e+02, percent-clipped=6.0 2023-03-26 04:34:18,925 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25421.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:34:26,150 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=25428.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:34:30,875 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2022, 2.8621, 2.6901, 1.1997, 2.9204, 2.1847, 0.6658, 1.8973], device='cuda:6'), covar=tensor([0.2724, 0.2446, 0.1889, 0.3845, 0.1433, 0.1178, 0.4330, 0.1838], device='cuda:6'), in_proj_covar=tensor([0.0157, 0.0172, 0.0164, 0.0129, 0.0156, 0.0123, 0.0147, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 04:34:47,668 INFO [finetune.py:976] (6/7) Epoch 5, batch 2550, loss[loss=0.2177, simple_loss=0.2773, pruned_loss=0.07901, over 4820.00 frames. ], tot_loss[loss=0.2231, simple_loss=0.2809, pruned_loss=0.08264, over 955025.50 frames. 
], batch size: 33, lr: 3.94e-03, grad_scale: 64.0 2023-03-26 04:34:49,023 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8343, 1.2541, 1.6536, 1.6101, 1.4562, 1.5040, 1.5610, 1.5092], device='cuda:6'), covar=tensor([0.5743, 0.8128, 0.6367, 0.7581, 0.8773, 0.6220, 0.9141, 0.6332], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0250, 0.0255, 0.0260, 0.0243, 0.0220, 0.0275, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:34:53,577 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25469.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:35:07,252 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25489.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 04:35:11,460 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0139, 1.9063, 1.5244, 1.7983, 1.8416, 1.7297, 1.7262, 2.5563], device='cuda:6'), covar=tensor([0.6989, 0.7344, 0.5693, 0.7320, 0.5897, 0.3942, 0.6783, 0.2461], device='cuda:6'), in_proj_covar=tensor([0.0283, 0.0257, 0.0223, 0.0286, 0.0239, 0.0201, 0.0244, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:35:16,169 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:35:20,835 INFO [finetune.py:976] (6/7) Epoch 5, batch 2600, loss[loss=0.2959, simple_loss=0.3323, pruned_loss=0.1298, over 4862.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.2826, pruned_loss=0.08311, over 955611.28 frames. ], batch size: 44, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:35:28,040 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.693e+02 2.088e+02 2.425e+02 4.415e+02, threshold=4.177e+02, percent-clipped=1.0 2023-03-26 04:35:48,653 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25552.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:35:54,563 INFO [finetune.py:976] (6/7) Epoch 5, batch 2650, loss[loss=0.2923, simple_loss=0.3323, pruned_loss=0.1261, over 4810.00 frames. ], tot_loss[loss=0.2246, simple_loss=0.2834, pruned_loss=0.08295, over 956796.59 frames. ], batch size: 45, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:36:30,079 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-26 04:36:33,947 INFO [finetune.py:976] (6/7) Epoch 5, batch 2700, loss[loss=0.2092, simple_loss=0.2634, pruned_loss=0.07749, over 4808.00 frames. ], tot_loss[loss=0.2249, simple_loss=0.2839, pruned_loss=0.08293, over 959398.36 frames. ], batch size: 40, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:36:43,927 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 04:36:50,928 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.205e+02 1.711e+02 2.002e+02 2.331e+02 3.948e+02, threshold=4.004e+02, percent-clipped=0.0 2023-03-26 04:37:26,221 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=25651.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:37:32,346 INFO [finetune.py:976] (6/7) Epoch 5, batch 2750, loss[loss=0.2116, simple_loss=0.2808, pruned_loss=0.07115, over 4818.00 frames. ], tot_loss[loss=0.221, simple_loss=0.2799, pruned_loss=0.08106, over 960011.23 frames. 
], batch size: 40, lr: 3.94e-03, grad_scale: 32.0 2023-03-26 04:37:41,808 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 04:37:58,592 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=25699.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:38:07,651 INFO [finetune.py:976] (6/7) Epoch 5, batch 2800, loss[loss=0.1707, simple_loss=0.2348, pruned_loss=0.05335, over 4901.00 frames. ], tot_loss[loss=0.2178, simple_loss=0.2763, pruned_loss=0.07972, over 960340.97 frames. ], batch size: 35, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:38:23,887 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.133e+02 1.611e+02 1.888e+02 2.301e+02 3.388e+02, threshold=3.776e+02, percent-clipped=0.0 2023-03-26 04:39:01,001 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9666, 3.8469, 3.6193, 1.8354, 3.9723, 2.9163, 1.1234, 2.6111], device='cuda:6'), covar=tensor([0.2589, 0.2295, 0.1477, 0.3584, 0.0947, 0.1139, 0.4541, 0.1740], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0172, 0.0164, 0.0129, 0.0156, 0.0123, 0.0146, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 04:39:03,402 INFO [finetune.py:976] (6/7) Epoch 5, batch 2850, loss[loss=0.2536, simple_loss=0.3068, pruned_loss=0.1002, over 4899.00 frames. ], tot_loss[loss=0.2157, simple_loss=0.274, pruned_loss=0.07871, over 961026.64 frames. ], batch size: 36, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:39:18,523 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=25784.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 04:39:37,536 INFO [finetune.py:976] (6/7) Epoch 5, batch 2900, loss[loss=0.1833, simple_loss=0.25, pruned_loss=0.05832, over 4755.00 frames. ], tot_loss[loss=0.2194, simple_loss=0.2777, pruned_loss=0.08049, over 958327.23 frames. ], batch size: 28, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:39:44,764 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.812e+02 2.065e+02 2.463e+02 5.082e+02, threshold=4.130e+02, percent-clipped=4.0 2023-03-26 04:40:02,006 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.9716, 4.2968, 4.5140, 4.7941, 4.6201, 4.4149, 5.0830, 1.6054], device='cuda:6'), covar=tensor([0.0824, 0.0897, 0.0757, 0.0984, 0.1337, 0.1532, 0.0533, 0.5453], device='cuda:6'), in_proj_covar=tensor([0.0356, 0.0244, 0.0273, 0.0292, 0.0337, 0.0282, 0.0303, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:40:09,052 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=25858.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:40:10,732 INFO [finetune.py:976] (6/7) Epoch 5, batch 2950, loss[loss=0.2132, simple_loss=0.2801, pruned_loss=0.0732, over 4928.00 frames. ], tot_loss[loss=0.2214, simple_loss=0.2803, pruned_loss=0.08123, over 957360.81 frames. 
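
The learning rate eases from 3.94e-03 to 3.93e-03 as batch_count crosses roughly 26.5k. That pace matches an Eden-style schedule under assumed constants (base_lr = 4e-3, lr_batches = 1e5, lr_epochs = 100):

```python
def eden_lr(base_lr, batch, epoch, lr_batches=100_000.0, lr_epochs=100.0):
    """Eden-style learning-rate schedule (a sketch with assumed
    constants): smooth power-law decay in both batch and epoch."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.5
    return base_lr * batch_factor * epoch_factor

# Consistency check against the records above:
#   eden_lr(0.004, 24000, 5)  -> 3.94e-03
#   eden_lr(0.004, 26800, 5)  -> 3.93e-03
```
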
], batch size: 38, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:40:16,940 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8275, 1.5826, 1.4812, 1.4262, 1.8466, 1.5761, 1.7938, 1.7234], device='cuda:6'), covar=tensor([0.1687, 0.3084, 0.4042, 0.3136, 0.3106, 0.1944, 0.3373, 0.2433], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0192, 0.0234, 0.0253, 0.0228, 0.0187, 0.0209, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:40:26,615 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 04:40:36,967 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 04:40:43,942 INFO [finetune.py:976] (6/7) Epoch 5, batch 3000, loss[loss=0.2021, simple_loss=0.2738, pruned_loss=0.06521, over 4916.00 frames. ], tot_loss[loss=0.2232, simple_loss=0.2822, pruned_loss=0.08213, over 956255.27 frames. ], batch size: 37, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:40:43,942 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 04:40:54,557 INFO [finetune.py:1010] (6/7) Epoch 5, validation: loss=0.1652, simple_loss=0.2371, pruned_loss=0.04667, over 2265189.00 frames. 2023-03-26 04:40:54,557 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6261MB 2023-03-26 04:41:00,088 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=25919.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:41:01,812 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.739e+02 2.096e+02 2.435e+02 4.160e+02, threshold=4.193e+02, percent-clipped=2.0 2023-03-26 04:41:09,796 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4104, 1.5484, 1.7304, 0.9824, 1.6296, 1.9110, 1.9037, 1.4822], device='cuda:6'), covar=tensor([0.1090, 0.0706, 0.0562, 0.0609, 0.0527, 0.0625, 0.0364, 0.0771], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0160, 0.0120, 0.0138, 0.0134, 0.0124, 0.0149, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.9199e-05, 1.1824e-04, 8.6792e-05, 1.0083e-04, 9.6988e-05, 9.1787e-05, 1.1103e-04, 1.0879e-04], device='cuda:6') 2023-03-26 04:41:27,995 INFO [finetune.py:976] (6/7) Epoch 5, batch 3050, loss[loss=0.1809, simple_loss=0.2549, pruned_loss=0.05343, over 4766.00 frames. ], tot_loss[loss=0.2225, simple_loss=0.2819, pruned_loss=0.08156, over 955585.29 frames. ], batch size: 28, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:41:46,808 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.98 vs. limit=5.0 2023-03-26 04:42:08,230 INFO [finetune.py:976] (6/7) Epoch 5, batch 3100, loss[loss=0.239, simple_loss=0.2577, pruned_loss=0.1101, over 3963.00 frames. ], tot_loss[loss=0.2207, simple_loss=0.2797, pruned_loss=0.0808, over 954321.60 frames. 
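
Batch 3000 triggers the periodic validation pass visible above: the model is switched to eval mode, the full dev set (about 2.27M frames) is scored without gradients as one frame-weighted average, and peak CUDA memory is reported afterwards. A generic sketch; model_forward is a hypothetical stand-in for the recipe's loss computation:

```python
import torch

def compute_validation_loss(model, valid_dl, device):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            # model_forward is assumed to return (summed loss, frame count).
            loss_sum, num_frames = model_forward(model, batch, device)
            tot_loss += float(loss_sum)
            tot_frames += num_frames
    model.train()
    return tot_loss / max(tot_frames, 1.0)

# The "Maximum memory allocated" lines map to the real PyTorch API:
#   torch.cuda.max_memory_allocated(device) // (1024 ** 2)   # -> "6261MB"
```
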
], batch size: 17, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:42:08,369 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4585, 1.6475, 1.7285, 1.0793, 1.5643, 1.9293, 1.9611, 1.4839], device='cuda:6'), covar=tensor([0.1190, 0.0597, 0.0559, 0.0600, 0.0595, 0.0524, 0.0298, 0.0772], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0160, 0.0120, 0.0138, 0.0134, 0.0124, 0.0149, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.9344e-05, 1.1845e-04, 8.6796e-05, 1.0096e-04, 9.6847e-05, 9.1591e-05, 1.1095e-04, 1.0874e-04], device='cuda:6') 2023-03-26 04:42:25,412 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.603e+02 1.879e+02 2.413e+02 4.411e+02, threshold=3.758e+02, percent-clipped=2.0 2023-03-26 04:43:10,314 INFO [finetune.py:976] (6/7) Epoch 5, batch 3150, loss[loss=0.2171, simple_loss=0.272, pruned_loss=0.08113, over 4914.00 frames. ], tot_loss[loss=0.2192, simple_loss=0.2776, pruned_loss=0.08045, over 955630.23 frames. ], batch size: 46, lr: 3.93e-03, grad_scale: 32.0 2023-03-26 04:43:30,920 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26084.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:43:42,303 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26102.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:43:49,557 INFO [finetune.py:976] (6/7) Epoch 5, batch 3200, loss[loss=0.2409, simple_loss=0.2879, pruned_loss=0.09694, over 4919.00 frames. ], tot_loss[loss=0.2166, simple_loss=0.2746, pruned_loss=0.07934, over 956041.77 frames. ], batch size: 37, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:43:58,345 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.599e+02 1.999e+02 2.453e+02 4.323e+02, threshold=3.997e+02, percent-clipped=1.0 2023-03-26 04:44:04,859 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26132.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:44:37,841 INFO [finetune.py:976] (6/7) Epoch 5, batch 3250, loss[loss=0.2617, simple_loss=0.3162, pruned_loss=0.1036, over 4867.00 frames. ], tot_loss[loss=0.2185, simple_loss=0.2761, pruned_loss=0.08049, over 956312.57 frames. 
], batch size: 44, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:44:43,927 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26163.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:44:47,985 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26168.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:44:56,589 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2904, 3.7160, 3.8455, 4.1603, 4.0163, 3.7112, 4.3826, 1.3569], device='cuda:6'), covar=tensor([0.0747, 0.0822, 0.0810, 0.0778, 0.1184, 0.1505, 0.0617, 0.4900], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0243, 0.0272, 0.0289, 0.0335, 0.0281, 0.0301, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:45:36,619 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7796, 1.7454, 1.4205, 1.7118, 1.6936, 1.6047, 1.6753, 2.4082], device='cuda:6'), covar=tensor([0.7460, 0.7411, 0.5773, 0.7817, 0.6778, 0.4077, 0.7237, 0.2658], device='cuda:6'), in_proj_covar=tensor([0.0280, 0.0254, 0.0219, 0.0281, 0.0237, 0.0199, 0.0242, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:45:40,774 INFO [finetune.py:976] (6/7) Epoch 5, batch 3300, loss[loss=0.2217, simple_loss=0.2952, pruned_loss=0.07413, over 4818.00 frames. ], tot_loss[loss=0.222, simple_loss=0.2799, pruned_loss=0.08205, over 954438.29 frames. ], batch size: 51, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:45:47,651 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26214.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 04:46:00,062 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.721e+02 2.149e+02 2.490e+02 4.939e+02, threshold=4.298e+02, percent-clipped=1.0 2023-03-26 04:46:06,002 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:46:19,836 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26250.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:46:29,546 INFO [finetune.py:976] (6/7) Epoch 5, batch 3350, loss[loss=0.1961, simple_loss=0.2563, pruned_loss=0.06794, over 4840.00 frames. ], tot_loss[loss=0.2242, simple_loss=0.2829, pruned_loss=0.08274, over 956557.01 frames. ], batch size: 44, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:46:30,318 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.74 vs. limit=5.0 2023-03-26 04:47:12,174 INFO [finetune.py:976] (6/7) Epoch 5, batch 3400, loss[loss=0.1985, simple_loss=0.2654, pruned_loss=0.0658, over 4810.00 frames. ], tot_loss[loss=0.2254, simple_loss=0.2846, pruned_loss=0.08309, over 958279.55 frames. 
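
Batch sizes in these records swing widely (13 at batch 1450, 66 at batch 2200, 17 at batch 3100) while per-batch frame totals stay in a narrow band: batching is capped by total audio duration rather than example count, so a batch of long utterances holds fewer of them. Assuming 10 ms features and 4x subsampling, a full 200 s batch is about 5,000 output frames, in line with the ~4,900-frame batches here. In lhotse this is the duration-constrained bucketing sampler (the cuts path below is hypothetical):

```python
from lhotse import CutSet
from lhotse.dataset import DynamicBucketingSampler

cuts = CutSet.from_file("data/fbank/cuts_train.jsonl.gz")  # hypothetical path

sampler = DynamicBucketingSampler(
    cuts,
    max_duration=200.0,  # seconds of audio per batch, the binding constraint
    num_buckets=30,      # group cuts of similar duration together
    shuffle=True,
    drop_last=True,
)
# Each mini-batch then holds as many cuts as fit under max_duration:
# many short utterances (batch size 66) or a few long ones (batch size 13).
```
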
], batch size: 39, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:47:12,289 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26311.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:47:19,912 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.624e+02 1.904e+02 2.361e+02 4.543e+02, threshold=3.807e+02, percent-clipped=1.0 2023-03-26 04:47:25,763 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0578, 2.6792, 2.4709, 1.4672, 2.5827, 2.1662, 1.9472, 2.1773], device='cuda:6'), covar=tensor([0.1092, 0.0951, 0.1795, 0.2368, 0.2031, 0.2383, 0.2342, 0.1509], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0202, 0.0204, 0.0192, 0.0218, 0.0210, 0.0223, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:47:29,410 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5350, 1.4276, 1.3735, 1.3722, 1.0396, 3.2556, 1.3101, 1.8162], device='cuda:6'), covar=tensor([0.4411, 0.3112, 0.2472, 0.3078, 0.2087, 0.0242, 0.2778, 0.1309], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0114, 0.0117, 0.0121, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:47:53,150 INFO [finetune.py:976] (6/7) Epoch 5, batch 3450, loss[loss=0.2192, simple_loss=0.2798, pruned_loss=0.07927, over 4908.00 frames. ], tot_loss[loss=0.2238, simple_loss=0.2829, pruned_loss=0.08233, over 956387.32 frames. ], batch size: 46, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:48:37,960 INFO [finetune.py:976] (6/7) Epoch 5, batch 3500, loss[loss=0.209, simple_loss=0.2673, pruned_loss=0.07537, over 4911.00 frames. ], tot_loss[loss=0.2202, simple_loss=0.2793, pruned_loss=0.08053, over 957828.87 frames. ], batch size: 36, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:48:43,885 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26414.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:48:51,309 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.234e+02 1.586e+02 1.962e+02 2.229e+02 4.326e+02, threshold=3.925e+02, percent-clipped=1.0 2023-03-26 04:49:01,360 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 04:49:23,749 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26458.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:49:25,518 INFO [finetune.py:976] (6/7) Epoch 5, batch 3550, loss[loss=0.1653, simple_loss=0.2346, pruned_loss=0.04802, over 4792.00 frames. ], tot_loss[loss=0.2171, simple_loss=0.2754, pruned_loss=0.07934, over 954461.67 frames. 
], batch size: 29, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:49:26,257 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9985, 1.8644, 1.7254, 2.0155, 1.4087, 4.5515, 1.6641, 2.3593], device='cuda:6'), covar=tensor([0.3210, 0.2330, 0.1983, 0.2196, 0.1777, 0.0086, 0.2446, 0.1216], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0113, 0.0116, 0.0121, 0.0117, 0.0097, 0.0100, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:49:34,660 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26475.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:50:11,930 INFO [finetune.py:976] (6/7) Epoch 5, batch 3600, loss[loss=0.2277, simple_loss=0.286, pruned_loss=0.08475, over 4863.00 frames. ], tot_loss[loss=0.2159, simple_loss=0.2734, pruned_loss=0.07918, over 954694.62 frames. ], batch size: 34, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:50:13,870 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26514.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:50:19,777 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.118e+02 1.588e+02 1.920e+02 2.290e+02 3.397e+02, threshold=3.841e+02, percent-clipped=0.0 2023-03-26 04:50:20,510 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26524.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:50:55,028 INFO [finetune.py:976] (6/7) Epoch 5, batch 3650, loss[loss=0.2516, simple_loss=0.3033, pruned_loss=0.09991, over 4842.00 frames. ], tot_loss[loss=0.218, simple_loss=0.2755, pruned_loss=0.08021, over 954111.27 frames. ], batch size: 30, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:50:55,700 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26562.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:51:31,368 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26606.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:51:34,853 INFO [finetune.py:976] (6/7) Epoch 5, batch 3700, loss[loss=0.232, simple_loss=0.3062, pruned_loss=0.07894, over 4805.00 frames. ], tot_loss[loss=0.2214, simple_loss=0.2793, pruned_loss=0.0817, over 953984.88 frames. ], batch size: 45, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:51:42,594 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.866e+02 2.222e+02 2.784e+02 4.852e+02, threshold=4.444e+02, percent-clipped=6.0 2023-03-26 04:51:58,289 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-26 04:52:07,160 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5994, 1.5993, 1.6733, 1.8411, 1.7123, 3.3463, 1.4819, 1.7151], device='cuda:6'), covar=tensor([0.0993, 0.1699, 0.1110, 0.0943, 0.1651, 0.0297, 0.1460, 0.1612], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0082, 0.0078, 0.0080, 0.0093, 0.0083, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:52:07,674 INFO [finetune.py:976] (6/7) Epoch 5, batch 3750, loss[loss=0.2691, simple_loss=0.32, pruned_loss=0.1091, over 4699.00 frames. ], tot_loss[loss=0.2234, simple_loss=0.2817, pruned_loss=0.08251, over 953389.23 frames. ], batch size: 23, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:52:45,028 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. 
limit=2.0 2023-03-26 04:53:00,961 INFO [finetune.py:976] (6/7) Epoch 5, batch 3800, loss[loss=0.2096, simple_loss=0.2774, pruned_loss=0.07091, over 4922.00 frames. ], tot_loss[loss=0.2253, simple_loss=0.2837, pruned_loss=0.08347, over 953421.02 frames. ], batch size: 42, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:53:08,704 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.710e+02 2.076e+02 2.649e+02 5.488e+02, threshold=4.152e+02, percent-clipped=2.0 2023-03-26 04:53:39,787 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5879, 1.5443, 1.9599, 1.8593, 1.7904, 3.9613, 1.3408, 1.7596], device='cuda:6'), covar=tensor([0.1003, 0.1793, 0.1348, 0.1060, 0.1617, 0.0196, 0.1542, 0.1678], device='cuda:6'), in_proj_covar=tensor([0.0078, 0.0082, 0.0078, 0.0080, 0.0093, 0.0083, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:53:57,419 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26758.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:53:59,695 INFO [finetune.py:976] (6/7) Epoch 5, batch 3850, loss[loss=0.2305, simple_loss=0.2772, pruned_loss=0.09187, over 4902.00 frames. ], tot_loss[loss=0.2231, simple_loss=0.2817, pruned_loss=0.08221, over 953011.53 frames. ], batch size: 35, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:54:11,281 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=26770.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:55:00,188 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26806.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:55:03,124 INFO [finetune.py:976] (6/7) Epoch 5, batch 3900, loss[loss=0.1895, simple_loss=0.2486, pruned_loss=0.06516, over 4902.00 frames. ], tot_loss[loss=0.2203, simple_loss=0.2784, pruned_loss=0.08114, over 953905.52 frames. ], batch size: 43, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:55:21,753 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.797e+02 2.146e+02 2.516e+02 4.177e+02, threshold=4.292e+02, percent-clipped=1.0 2023-03-26 04:55:22,214 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0 2023-03-26 04:55:22,487 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26824.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:55:57,247 INFO [finetune.py:976] (6/7) Epoch 5, batch 3950, loss[loss=0.201, simple_loss=0.2541, pruned_loss=0.07398, over 4747.00 frames. ], tot_loss[loss=0.2163, simple_loss=0.2743, pruned_loss=0.07914, over 954179.13 frames. 
], batch size: 27, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:56:06,712 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26872.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:56:14,705 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9011, 1.6597, 2.3030, 1.5181, 1.9912, 2.2090, 1.5670, 2.3855], device='cuda:6'), covar=tensor([0.1581, 0.2119, 0.1585, 0.2390, 0.1076, 0.1838, 0.2879, 0.0979], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0204, 0.0201, 0.0196, 0.0185, 0.0223, 0.0216, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 04:56:29,740 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26904.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 04:56:30,933 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1550, 1.2376, 0.9198, 1.9398, 2.4352, 1.8050, 1.5460, 1.9551], device='cuda:6'), covar=tensor([0.1445, 0.2430, 0.2190, 0.1232, 0.1873, 0.2006, 0.1578, 0.2060], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0115, 0.0093, 0.0123, 0.0096, 0.0100, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 04:56:30,944 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=26906.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:56:34,935 INFO [finetune.py:976] (6/7) Epoch 5, batch 4000, loss[loss=0.2599, simple_loss=0.3132, pruned_loss=0.1033, over 4760.00 frames. ], tot_loss[loss=0.2169, simple_loss=0.2743, pruned_loss=0.07974, over 953621.00 frames. ], batch size: 59, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:56:43,169 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.227e+02 1.662e+02 2.009e+02 2.453e+02 3.802e+02, threshold=4.018e+02, percent-clipped=0.0 2023-03-26 04:57:05,892 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=26950.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:57:08,738 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=26954.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:57:18,754 INFO [finetune.py:976] (6/7) Epoch 5, batch 4050, loss[loss=0.2398, simple_loss=0.2795, pruned_loss=0.1001, over 4283.00 frames. ], tot_loss[loss=0.2212, simple_loss=0.2785, pruned_loss=0.08199, over 953149.89 frames. 
], batch size: 65, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:57:18,871 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3209, 1.2395, 1.2776, 1.2847, 0.9068, 2.0637, 0.8227, 1.3303], device='cuda:6'), covar=tensor([0.3028, 0.2189, 0.1850, 0.2218, 0.1823, 0.0395, 0.3020, 0.1222], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0117, 0.0122, 0.0118, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 04:57:26,841 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=26965.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 04:57:50,345 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0752, 1.4787, 0.8347, 1.9869, 2.2787, 1.8556, 1.6707, 1.8333], device='cuda:6'), covar=tensor([0.1448, 0.2146, 0.2379, 0.1245, 0.1962, 0.2050, 0.1458, 0.2020], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0114, 0.0093, 0.0123, 0.0096, 0.0100, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 04:58:10,824 INFO [finetune.py:976] (6/7) Epoch 5, batch 4100, loss[loss=0.2712, simple_loss=0.3177, pruned_loss=0.1124, over 4917.00 frames. ], tot_loss[loss=0.2253, simple_loss=0.2826, pruned_loss=0.08403, over 954150.19 frames. ], batch size: 33, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:58:11,451 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27011.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:58:19,553 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.740e+02 2.111e+02 2.577e+02 5.326e+02, threshold=4.223e+02, percent-clipped=3.0 2023-03-26 04:58:58,625 INFO [finetune.py:976] (6/7) Epoch 5, batch 4150, loss[loss=0.296, simple_loss=0.3455, pruned_loss=0.1233, over 4805.00 frames. ], tot_loss[loss=0.2254, simple_loss=0.2827, pruned_loss=0.08407, over 952620.69 frames. ], batch size: 39, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:59:05,623 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27070.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:59:32,489 INFO [finetune.py:976] (6/7) Epoch 5, batch 4200, loss[loss=0.2264, simple_loss=0.3011, pruned_loss=0.07585, over 4899.00 frames. ], tot_loss[loss=0.2244, simple_loss=0.283, pruned_loss=0.08292, over 954163.23 frames. ], batch size: 37, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 04:59:37,724 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=27118.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 04:59:41,615 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.682e+02 1.955e+02 2.295e+02 5.538e+02, threshold=3.911e+02, percent-clipped=3.0 2023-03-26 04:59:57,830 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=27148.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:00:01,655 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 05:00:10,426 INFO [finetune.py:976] (6/7) Epoch 5, batch 4250, loss[loss=0.2032, simple_loss=0.2618, pruned_loss=0.07229, over 4772.00 frames. ], tot_loss[loss=0.221, simple_loss=0.2791, pruned_loss=0.08148, over 953734.21 frames. 
], batch size: 54, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:00:39,586 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3701, 2.1755, 2.7917, 1.8140, 2.4347, 2.7637, 2.0194, 2.8956], device='cuda:6'), covar=tensor([0.1571, 0.2201, 0.1659, 0.2673, 0.0999, 0.1604, 0.2564, 0.1052], device='cuda:6'), in_proj_covar=tensor([0.0207, 0.0205, 0.0202, 0.0197, 0.0186, 0.0223, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:00:47,753 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.4289, 2.9377, 2.6777, 1.4831, 2.7871, 2.4124, 2.1912, 2.4002], device='cuda:6'), covar=tensor([0.0842, 0.0990, 0.1826, 0.2491, 0.1970, 0.2171, 0.2309, 0.1480], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0202, 0.0204, 0.0191, 0.0218, 0.0211, 0.0223, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:00:57,744 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2439, 2.1821, 1.6830, 2.4292, 2.3895, 1.9494, 2.8833, 2.2904], device='cuda:6'), covar=tensor([0.1732, 0.3572, 0.4166, 0.3777, 0.2937, 0.1941, 0.4323, 0.2281], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0193, 0.0237, 0.0254, 0.0230, 0.0189, 0.0211, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:00:58,535 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6841, 0.6111, 1.5758, 1.4398, 1.4262, 1.3583, 1.2502, 1.4677], device='cuda:6'), covar=tensor([0.4948, 0.6790, 0.5862, 0.6067, 0.6922, 0.4992, 0.7315, 0.5517], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0249, 0.0256, 0.0259, 0.0242, 0.0219, 0.0275, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:01:07,906 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8940, 1.3413, 1.7118, 1.7178, 1.5229, 1.5290, 1.6100, 1.6481], device='cuda:6'), covar=tensor([0.5372, 0.7021, 0.5961, 0.7049, 0.7779, 0.6204, 0.9057, 0.5721], device='cuda:6'), in_proj_covar=tensor([0.0230, 0.0249, 0.0256, 0.0259, 0.0242, 0.0219, 0.0275, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:01:08,500 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27209.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:01:09,574 INFO [finetune.py:976] (6/7) Epoch 5, batch 4300, loss[loss=0.2044, simple_loss=0.2639, pruned_loss=0.07241, over 4892.00 frames. ], tot_loss[loss=0.2179, simple_loss=0.2759, pruned_loss=0.0799, over 955347.63 frames. 
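The zipformer.py attn_weights_entropy dumps above report one value per attention head (eight per row), by its name the entropy of each head's attention distribution, alongside covariance diagnostics for the input and output projections. A sketch of the headline quantity, under the assumption that it is the per-head entropy averaged over query positions:

```python
# Sketch: mean per-head entropy of softmaxed attention weights, producing one
# value per head as in the attn_weights_entropy rows above (reduction assumed).
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (num_heads, num_queries, num_keys), rows softmax-normalised
    ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (heads, queries)
    return ent.mean(dim=-1)                           # (heads,)

attn = torch.softmax(torch.randn(8, 50, 50), dim=-1)
print(attn_weights_entropy(attn))  # eight values, like the logged tensors
```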
], batch size: 32, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:01:26,940 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.762e+02 2.023e+02 2.453e+02 1.035e+03, threshold=4.046e+02, percent-clipped=2.0 2023-03-26 05:01:31,827 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6763, 1.5531, 1.9643, 1.2786, 1.6260, 1.8337, 1.5471, 2.0521], device='cuda:6'), covar=tensor([0.1220, 0.2116, 0.1150, 0.1690, 0.0952, 0.1285, 0.2598, 0.0823], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0204, 0.0202, 0.0197, 0.0186, 0.0222, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:01:59,786 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27260.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:02:00,300 INFO [finetune.py:976] (6/7) Epoch 5, batch 4350, loss[loss=0.2377, simple_loss=0.2893, pruned_loss=0.0931, over 4794.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.2735, pruned_loss=0.07874, over 955866.52 frames. ], batch size: 51, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:02:30,556 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27306.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:02:33,996 INFO [finetune.py:976] (6/7) Epoch 5, batch 4400, loss[loss=0.2215, simple_loss=0.2775, pruned_loss=0.08279, over 4774.00 frames. ], tot_loss[loss=0.2167, simple_loss=0.2745, pruned_loss=0.07939, over 956858.31 frames. ], batch size: 26, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:02:41,212 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.601e+02 1.888e+02 2.389e+02 3.644e+02, threshold=3.775e+02, percent-clipped=0.0 2023-03-26 05:02:50,064 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 05:02:54,111 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6653, 1.5637, 1.5585, 0.9077, 1.6404, 1.8628, 1.7971, 1.4130], device='cuda:6'), covar=tensor([0.0918, 0.0734, 0.0533, 0.0639, 0.0403, 0.0495, 0.0351, 0.0749], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0159, 0.0120, 0.0137, 0.0133, 0.0123, 0.0148, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.8148e-05, 1.1797e-04, 8.7253e-05, 9.9678e-05, 9.5841e-05, 9.1352e-05, 1.0988e-04, 1.0817e-04], device='cuda:6') 2023-03-26 05:03:07,516 INFO [finetune.py:976] (6/7) Epoch 5, batch 4450, loss[loss=0.2872, simple_loss=0.3412, pruned_loss=0.1167, over 4848.00 frames. ], tot_loss[loss=0.2219, simple_loss=0.2803, pruned_loss=0.08178, over 956854.88 frames. ], batch size: 49, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:03:17,305 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6239, 1.4786, 1.2510, 1.2193, 1.4840, 1.4194, 1.4013, 2.0120], device='cuda:6'), covar=tensor([0.6679, 0.6396, 0.5053, 0.6016, 0.5531, 0.3547, 0.5729, 0.2578], device='cuda:6'), in_proj_covar=tensor([0.0279, 0.0255, 0.0218, 0.0281, 0.0236, 0.0199, 0.0241, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:03:40,731 INFO [finetune.py:976] (6/7) Epoch 5, batch 4500, loss[loss=0.1642, simple_loss=0.2324, pruned_loss=0.04801, over 4750.00 frames. ], tot_loss[loss=0.2226, simple_loss=0.2815, pruned_loss=0.08184, over 956537.11 frames. 
], batch size: 28, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:03:48,440 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.291e+02 1.723e+02 2.077e+02 2.543e+02 6.449e+02, threshold=4.154e+02, percent-clipped=4.0 2023-03-26 05:03:50,440 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2001, 2.1016, 1.9982, 2.2709, 2.8012, 2.2079, 1.8873, 1.6780], device='cuda:6'), covar=tensor([0.2366, 0.2187, 0.1945, 0.1751, 0.1961, 0.1200, 0.2650, 0.1915], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0210, 0.0202, 0.0185, 0.0237, 0.0177, 0.0216, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:04:06,574 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2323, 1.3007, 1.3872, 0.6253, 1.2445, 1.5522, 1.5096, 1.2732], device='cuda:6'), covar=tensor([0.0966, 0.0563, 0.0456, 0.0593, 0.0452, 0.0446, 0.0354, 0.0754], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0160, 0.0121, 0.0138, 0.0134, 0.0124, 0.0149, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.8980e-05, 1.1830e-04, 8.7824e-05, 1.0067e-04, 9.6862e-05, 9.1985e-05, 1.1086e-04, 1.0898e-04], device='cuda:6') 2023-03-26 05:04:14,225 INFO [finetune.py:976] (6/7) Epoch 5, batch 4550, loss[loss=0.2768, simple_loss=0.3164, pruned_loss=0.1185, over 4130.00 frames. ], tot_loss[loss=0.2236, simple_loss=0.2828, pruned_loss=0.0822, over 954594.79 frames. ], batch size: 66, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:04:24,211 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.93 vs. limit=5.0 2023-03-26 05:04:42,683 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=27504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:04:47,379 INFO [finetune.py:976] (6/7) Epoch 5, batch 4600, loss[loss=0.2743, simple_loss=0.3054, pruned_loss=0.1217, over 4340.00 frames. ], tot_loss[loss=0.2221, simple_loss=0.2813, pruned_loss=0.08148, over 955205.24 frames. ], batch size: 19, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:04:55,105 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.119e+02 1.596e+02 2.009e+02 2.524e+02 8.514e+02, threshold=4.018e+02, percent-clipped=5.0 2023-03-26 05:05:20,045 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27560.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 05:05:20,542 INFO [finetune.py:976] (6/7) Epoch 5, batch 4650, loss[loss=0.2613, simple_loss=0.3107, pruned_loss=0.1059, over 4838.00 frames. ], tot_loss[loss=0.2179, simple_loss=0.2771, pruned_loss=0.07935, over 955232.80 frames. ], batch size: 47, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:06:07,098 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27606.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:06:13,640 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=27608.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 05:06:15,418 INFO [finetune.py:976] (6/7) Epoch 5, batch 4700, loss[loss=0.1857, simple_loss=0.2408, pruned_loss=0.06531, over 4874.00 frames. ], tot_loss[loss=0.2163, simple_loss=0.2746, pruned_loss=0.07903, over 954798.52 frames. 
], batch size: 31, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:06:16,770 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8487, 0.8883, 1.5469, 1.4997, 1.3743, 1.4016, 1.3319, 1.5571], device='cuda:6'), covar=tensor([0.6618, 0.9202, 0.7629, 0.7968, 0.9082, 0.6800, 1.0506, 0.7113], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0247, 0.0255, 0.0257, 0.0241, 0.0218, 0.0275, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:06:27,633 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.58 vs. limit=5.0 2023-03-26 05:06:27,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.549e+02 1.903e+02 2.293e+02 3.137e+02, threshold=3.806e+02, percent-clipped=0.0 2023-03-26 05:07:04,519 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6652, 1.4435, 1.4918, 1.5959, 1.1413, 3.5375, 1.3075, 1.7709], device='cuda:6'), covar=tensor([0.3339, 0.2370, 0.2159, 0.2389, 0.1943, 0.0171, 0.2742, 0.1425], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0117, 0.0122, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 05:07:05,722 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=27654.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:07:14,261 INFO [finetune.py:976] (6/7) Epoch 5, batch 4750, loss[loss=0.2026, simple_loss=0.2633, pruned_loss=0.07096, over 4871.00 frames. ], tot_loss[loss=0.2159, simple_loss=0.2735, pruned_loss=0.07918, over 952687.52 frames. ], batch size: 31, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:07:56,929 INFO [finetune.py:976] (6/7) Epoch 5, batch 4800, loss[loss=0.2318, simple_loss=0.2886, pruned_loss=0.08752, over 4861.00 frames. ], tot_loss[loss=0.2182, simple_loss=0.2764, pruned_loss=0.08005, over 955160.14 frames. ], batch size: 34, lr: 3.93e-03, grad_scale: 16.0 2023-03-26 05:07:59,319 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=27714.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:08:04,701 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.708e+02 2.066e+02 2.363e+02 4.852e+02, threshold=4.133e+02, percent-clipped=3.0 2023-03-26 05:08:22,122 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3095, 2.9216, 3.0270, 3.2233, 3.0676, 2.8875, 3.3514, 0.9730], device='cuda:6'), covar=tensor([0.1149, 0.1046, 0.1145, 0.1193, 0.1705, 0.1778, 0.1064, 0.5171], device='cuda:6'), in_proj_covar=tensor([0.0357, 0.0245, 0.0278, 0.0294, 0.0338, 0.0286, 0.0306, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:08:30,288 INFO [finetune.py:976] (6/7) Epoch 5, batch 4850, loss[loss=0.2469, simple_loss=0.3161, pruned_loss=0.08888, over 4819.00 frames. ], tot_loss[loss=0.2202, simple_loss=0.2794, pruned_loss=0.08051, over 953926.70 frames. 
], batch size: 39, lr: 3.92e-03, grad_scale: 16.0 2023-03-26 05:08:38,326 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7312, 1.6604, 1.6218, 1.9441, 2.1322, 1.9228, 1.4811, 1.4748], device='cuda:6'), covar=tensor([0.2250, 0.2115, 0.1927, 0.1698, 0.1916, 0.1150, 0.2743, 0.1948], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0211, 0.0203, 0.0186, 0.0238, 0.0177, 0.0217, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:08:39,500 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=27775.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 05:08:46,384 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.5135, 4.7655, 5.0079, 5.3075, 5.2001, 4.9235, 5.6001, 1.5934], device='cuda:6'), covar=tensor([0.0595, 0.0856, 0.0654, 0.0766, 0.1120, 0.1444, 0.0513, 0.5513], device='cuda:6'), in_proj_covar=tensor([0.0358, 0.0245, 0.0278, 0.0294, 0.0338, 0.0286, 0.0307, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:08:59,140 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=27804.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:09:04,310 INFO [finetune.py:976] (6/7) Epoch 5, batch 4900, loss[loss=0.2196, simple_loss=0.2741, pruned_loss=0.0825, over 4778.00 frames. ], tot_loss[loss=0.2216, simple_loss=0.2808, pruned_loss=0.08122, over 952905.19 frames. ], batch size: 26, lr: 3.92e-03, grad_scale: 16.0 2023-03-26 05:09:12,047 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.126e+02 1.628e+02 1.864e+02 2.335e+02 3.818e+02, threshold=3.728e+02, percent-clipped=0.0 2023-03-26 05:09:30,674 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=27852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:09:37,157 INFO [finetune.py:976] (6/7) Epoch 5, batch 4950, loss[loss=0.256, simple_loss=0.3086, pruned_loss=0.1018, over 4889.00 frames. ], tot_loss[loss=0.2217, simple_loss=0.2812, pruned_loss=0.08105, over 952222.52 frames. ], batch size: 32, lr: 3.92e-03, grad_scale: 16.0 2023-03-26 05:09:40,540 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-26 05:10:04,863 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9003, 1.3397, 1.7123, 1.6944, 1.4991, 1.4623, 1.6084, 1.6006], device='cuda:6'), covar=tensor([0.5510, 0.7821, 0.6125, 0.7260, 0.7979, 0.6280, 0.8623, 0.5836], device='cuda:6'), in_proj_covar=tensor([0.0228, 0.0248, 0.0255, 0.0257, 0.0241, 0.0218, 0.0274, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:10:10,463 INFO [finetune.py:976] (6/7) Epoch 5, batch 5000, loss[loss=0.1839, simple_loss=0.2429, pruned_loss=0.06242, over 4733.00 frames. ], tot_loss[loss=0.219, simple_loss=0.2786, pruned_loss=0.0797, over 953112.64 frames. ], batch size: 23, lr: 3.92e-03, grad_scale: 16.0 2023-03-26 05:10:19,080 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.230e+02 1.618e+02 1.837e+02 2.301e+02 4.829e+02, threshold=3.674e+02, percent-clipped=1.0 2023-03-26 05:10:31,753 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. 
limit=2.0 2023-03-26 05:10:34,696 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0115, 1.8317, 1.5152, 1.7659, 1.7461, 1.6621, 1.7743, 2.4631], device='cuda:6'), covar=tensor([0.6130, 0.6571, 0.4961, 0.6634, 0.5821, 0.3591, 0.5990, 0.2400], device='cuda:6'), in_proj_covar=tensor([0.0280, 0.0256, 0.0218, 0.0282, 0.0238, 0.0200, 0.0243, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:10:43,564 INFO [finetune.py:976] (6/7) Epoch 5, batch 5050, loss[loss=0.1926, simple_loss=0.2434, pruned_loss=0.07088, over 4789.00 frames. ], tot_loss[loss=0.2164, simple_loss=0.2754, pruned_loss=0.0787, over 954804.16 frames. ], batch size: 26, lr: 3.92e-03, grad_scale: 16.0 2023-03-26 05:11:48,642 INFO [finetune.py:976] (6/7) Epoch 5, batch 5100, loss[loss=0.1949, simple_loss=0.2607, pruned_loss=0.06457, over 4819.00 frames. ], tot_loss[loss=0.2136, simple_loss=0.2722, pruned_loss=0.07744, over 955771.41 frames. ], batch size: 40, lr: 3.92e-03, grad_scale: 16.0 2023-03-26 05:12:02,950 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.639e+02 1.875e+02 2.408e+02 3.954e+02, threshold=3.749e+02, percent-clipped=2.0 2023-03-26 05:12:32,816 INFO [finetune.py:976] (6/7) Epoch 5, batch 5150, loss[loss=0.2447, simple_loss=0.3075, pruned_loss=0.0909, over 4720.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2732, pruned_loss=0.07811, over 954519.43 frames. ], batch size: 54, lr: 3.92e-03, grad_scale: 16.0 2023-03-26 05:12:32,954 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0930, 1.7654, 1.4109, 0.6192, 1.5482, 1.8058, 1.6279, 1.7035], device='cuda:6'), covar=tensor([0.0805, 0.0675, 0.1083, 0.1653, 0.1170, 0.1874, 0.1770, 0.0687], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0201, 0.0204, 0.0191, 0.0218, 0.0209, 0.0222, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:12:38,917 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28070.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:12:47,751 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6718, 1.3818, 2.1176, 3.0573, 2.1291, 2.1919, 1.1106, 2.4620], device='cuda:6'), covar=tensor([0.1688, 0.1545, 0.1138, 0.0556, 0.0841, 0.1515, 0.1682, 0.0564], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0118, 0.0136, 0.0165, 0.0103, 0.0142, 0.0127, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 05:13:06,308 INFO [finetune.py:976] (6/7) Epoch 5, batch 5200, loss[loss=0.1964, simple_loss=0.2497, pruned_loss=0.07153, over 3977.00 frames. ], tot_loss[loss=0.2193, simple_loss=0.2779, pruned_loss=0.08035, over 953189.12 frames. ], batch size: 17, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:13:12,676 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 05:13:14,538 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.308e+02 1.748e+02 1.996e+02 2.342e+02 5.311e+02, threshold=3.992e+02, percent-clipped=1.0 2023-03-26 05:13:20,232 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. 
limit=2.0 2023-03-26 05:13:23,506 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1488, 1.0770, 1.5218, 1.0840, 1.0769, 1.3054, 1.0736, 1.4354], device='cuda:6'), covar=tensor([0.1603, 0.2323, 0.1230, 0.1524, 0.1361, 0.1493, 0.2842, 0.1065], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0203, 0.0199, 0.0194, 0.0185, 0.0222, 0.0216, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:13:39,346 INFO [finetune.py:976] (6/7) Epoch 5, batch 5250, loss[loss=0.1882, simple_loss=0.2379, pruned_loss=0.06922, over 4746.00 frames. ], tot_loss[loss=0.2208, simple_loss=0.2797, pruned_loss=0.08098, over 953849.86 frames. ], batch size: 23, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:13:40,734 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8921, 1.6209, 1.4331, 1.5784, 1.6003, 1.5925, 1.5725, 2.3884], device='cuda:6'), covar=tensor([0.6736, 0.7272, 0.5341, 0.6384, 0.5973, 0.3671, 0.6501, 0.2431], device='cuda:6'), in_proj_covar=tensor([0.0282, 0.0257, 0.0220, 0.0283, 0.0239, 0.0202, 0.0244, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:14:12,316 INFO [finetune.py:976] (6/7) Epoch 5, batch 5300, loss[loss=0.2725, simple_loss=0.33, pruned_loss=0.1075, over 4891.00 frames. ], tot_loss[loss=0.2228, simple_loss=0.2819, pruned_loss=0.08178, over 956038.62 frames. ], batch size: 35, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:14:19,560 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.725e+02 1.957e+02 2.435e+02 6.444e+02, threshold=3.915e+02, percent-clipped=2.0 2023-03-26 05:14:24,327 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:14:51,801 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28255.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 05:14:56,276 INFO [finetune.py:976] (6/7) Epoch 5, batch 5350, loss[loss=0.2489, simple_loss=0.3046, pruned_loss=0.09665, over 4894.00 frames. ], tot_loss[loss=0.2219, simple_loss=0.2818, pruned_loss=0.081, over 956807.70 frames. ], batch size: 37, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:15:13,544 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28287.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:15:15,778 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28290.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:15:28,416 INFO [finetune.py:976] (6/7) Epoch 5, batch 5400, loss[loss=0.2039, simple_loss=0.2584, pruned_loss=0.07471, over 4749.00 frames. ], tot_loss[loss=0.2185, simple_loss=0.2781, pruned_loss=0.07949, over 956963.56 frames. ], batch size: 54, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:15:31,991 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28316.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 05:15:36,016 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.900e+01 1.568e+02 1.878e+02 2.260e+02 3.573e+02, threshold=3.756e+02, percent-clipped=0.0 2023-03-26 05:15:53,749 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28348.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:16:01,446 INFO [finetune.py:976] (6/7) Epoch 5, batch 5450, loss[loss=0.1565, simple_loss=0.2285, pruned_loss=0.04226, over 4921.00 frames. 
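On the scaling.py Whitening entries (for example num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 just above): the module evidently tracks a per-group whiteness statistic and logs when it nears the limit. One standard metric of this kind, offered as an assumption rather than the verified scaling.py formula, is the eigenvalue-spread ratio mean(eig^2) / mean(eig)^2 of each group's channel covariance, which is 1.0 for perfectly white features and grows as the spectrum becomes lopsided:

```python
# Assumed whiteness metric (not verified against scaling.py): eigenvalue
# spread mean(eig^2) / mean(eig)^2 over each channel group's covariance.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    frames, channels = x.shape
    g = x.reshape(frames, num_groups, channels // num_groups)
    per_group = []
    for k in range(num_groups):
        cov = torch.cov(g[:, k, :].T)          # channel covariance, (c, c)
        eigs = torch.linalg.eigvalsh(cov)      # real eigenvalues, ascending
        per_group.append((eigs ** 2).mean() / eigs.mean() ** 2)
    return torch.stack(per_group).mean()

x = torch.randn(1000, 192)   # white input: metric near 1.0, under limit=2.0
print(whitening_metric(x, num_groups=8))
```

Under that reading, metric=1.91 vs. limit=2.0 means the group covariances were close to, but still under, the point where the module would intervene.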
], tot_loss[loss=0.2161, simple_loss=0.2751, pruned_loss=0.07854, over 953482.84 frames. ], batch size: 36, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:16:18,292 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28370.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:16:59,843 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2977, 1.6084, 0.9988, 2.2666, 2.7044, 1.9876, 1.9454, 2.2793], device='cuda:6'), covar=tensor([0.1463, 0.2027, 0.2179, 0.1167, 0.1729, 0.1709, 0.1450, 0.1960], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0098, 0.0116, 0.0094, 0.0124, 0.0097, 0.0100, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 05:17:02,777 INFO [finetune.py:976] (6/7) Epoch 5, batch 5500, loss[loss=0.2838, simple_loss=0.3298, pruned_loss=0.1189, over 4198.00 frames. ], tot_loss[loss=0.2118, simple_loss=0.2706, pruned_loss=0.0765, over 953523.98 frames. ], batch size: 65, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:17:12,778 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28418.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:17:21,151 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.651e+02 2.036e+02 2.478e+02 5.642e+02, threshold=4.072e+02, percent-clipped=3.0 2023-03-26 05:17:32,736 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 05:18:07,648 INFO [finetune.py:976] (6/7) Epoch 5, batch 5550, loss[loss=0.2269, simple_loss=0.2971, pruned_loss=0.07833, over 4810.00 frames. ], tot_loss[loss=0.2134, simple_loss=0.2726, pruned_loss=0.07717, over 954194.69 frames. ], batch size: 38, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:18:18,286 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-26 05:18:36,701 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:18:45,458 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28492.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:19:02,057 INFO [finetune.py:976] (6/7) Epoch 5, batch 5600, loss[loss=0.2378, simple_loss=0.303, pruned_loss=0.08635, over 4809.00 frames. ], tot_loss[loss=0.2161, simple_loss=0.2755, pruned_loss=0.0783, over 951262.06 frames. ], batch size: 41, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:19:14,572 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.753e+02 2.098e+02 2.591e+02 4.684e+02, threshold=4.196e+02, percent-clipped=2.0 2023-03-26 05:19:40,192 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28547.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:19:47,590 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28553.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:19:52,217 INFO [finetune.py:976] (6/7) Epoch 5, batch 5650, loss[loss=0.2214, simple_loss=0.2922, pruned_loss=0.07528, over 4934.00 frames. ], tot_loss[loss=0.2179, simple_loss=0.2781, pruned_loss=0.07882, over 952869.17 frames. 
], batch size: 33, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:19:53,439 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.4215, 4.7737, 5.0034, 5.2878, 5.1383, 4.9086, 5.5175, 1.6193], device='cuda:6'), covar=tensor([0.0754, 0.0785, 0.0775, 0.1011, 0.1074, 0.1213, 0.0486, 0.5586], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0242, 0.0274, 0.0292, 0.0335, 0.0283, 0.0302, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:19:59,394 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7763, 1.6020, 1.5040, 1.7636, 2.1001, 1.8399, 1.2280, 1.4745], device='cuda:6'), covar=tensor([0.2375, 0.2259, 0.2175, 0.1911, 0.1795, 0.1206, 0.3006, 0.1983], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0209, 0.0202, 0.0186, 0.0237, 0.0176, 0.0214, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:20:02,301 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8452, 1.5571, 2.2042, 1.4475, 1.8199, 1.9544, 1.5485, 2.1536], device='cuda:6'), covar=tensor([0.1361, 0.2023, 0.1110, 0.1644, 0.1099, 0.1408, 0.2593, 0.0954], device='cuda:6'), in_proj_covar=tensor([0.0207, 0.0206, 0.0202, 0.0197, 0.0188, 0.0223, 0.0218, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:20:06,328 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28585.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:20:06,374 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0878, 1.9085, 1.8402, 2.1051, 1.6876, 4.8276, 1.7565, 2.5738], device='cuda:6'), covar=tensor([0.3236, 0.2348, 0.1961, 0.2090, 0.1650, 0.0095, 0.2371, 0.1187], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0114, 0.0118, 0.0121, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 05:20:21,893 INFO [finetune.py:976] (6/7) Epoch 5, batch 5700, loss[loss=0.2105, simple_loss=0.253, pruned_loss=0.08402, over 4612.00 frames. ], tot_loss[loss=0.2157, simple_loss=0.2751, pruned_loss=0.07818, over 935958.41 frames. ], batch size: 20, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:20:21,943 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28611.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:20:28,546 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5667, 2.2698, 2.1454, 2.3861, 2.5256, 2.2905, 2.8284, 2.5217], device='cuda:6'), covar=tensor([0.1475, 0.2812, 0.3321, 0.3038, 0.2358, 0.1630, 0.2640, 0.2039], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0192, 0.0235, 0.0253, 0.0229, 0.0189, 0.0211, 0.0189], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:20:29,013 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.580e+02 1.894e+02 2.210e+02 5.665e+02, threshold=3.789e+02, percent-clipped=1.0 2023-03-26 05:20:53,190 INFO [finetune.py:976] (6/7) Epoch 6, batch 0, loss[loss=0.162, simple_loss=0.2335, pruned_loss=0.04528, over 4770.00 frames. ], tot_loss[loss=0.162, simple_loss=0.2335, pruned_loss=0.04528, over 4770.00 frames. 
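The warmup_begin/warmup_end entries above come from zipformer's stochastic layer dropping: each encoder stack logs its warmup window in batches together with which layers, if any, are dropped on this step. Note that drops (num_to_drop=1) still occur at batch_count around 28600, far past warmup_end=2000.0, so a small drop probability evidently persists after warmup. A sketch of the bookkeeping follows; the probability is a placeholder, not the recipe's actual schedule:

```python
# Sketch: pick layers to drop for one forward pass and format the log fields.
# p_drop is illustrative; zipformer.py derives its own schedule from warmup.
import random

def pick_layers_to_drop(num_layers: int, p_drop: float) -> set:
    return {i for i in range(num_layers) if random.random() < p_drop}

layers = pick_layers_to_drop(num_layers=4, p_drop=0.075)
print(f"num_to_drop={len(layers)}, layers_to_drop={layers}")  # set() when empty
```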
], batch size: 26, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:20:53,190 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 05:20:56,564 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4426, 1.6095, 1.4998, 1.6260, 1.7797, 3.0628, 1.4664, 1.7436], device='cuda:6'), covar=tensor([0.1021, 0.1708, 0.1044, 0.1003, 0.1415, 0.0367, 0.1407, 0.1641], device='cuda:6'), in_proj_covar=tensor([0.0079, 0.0083, 0.0078, 0.0081, 0.0094, 0.0084, 0.0087, 0.0081], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 05:21:08,970 INFO [finetune.py:1010] (6/7) Epoch 6, validation: loss=0.1659, simple_loss=0.2379, pruned_loss=0.04693, over 2265189.00 frames. 2023-03-26 05:21:08,971 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6300MB 2023-03-26 05:21:15,235 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28643.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:21:15,283 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1909, 2.1972, 2.1138, 1.3871, 2.3653, 2.3266, 2.1911, 1.8180], device='cuda:6'), covar=tensor([0.0707, 0.0625, 0.0842, 0.1071, 0.0495, 0.0761, 0.0767, 0.1181], device='cuda:6'), in_proj_covar=tensor([0.0141, 0.0136, 0.0146, 0.0131, 0.0114, 0.0145, 0.0149, 0.0166], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:21:20,045 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3864, 2.7318, 2.2554, 1.6849, 2.6854, 2.7987, 2.5246, 2.1689], device='cuda:6'), covar=tensor([0.0733, 0.0597, 0.0875, 0.1059, 0.0578, 0.0683, 0.0735, 0.1097], device='cuda:6'), in_proj_covar=tensor([0.0141, 0.0136, 0.0146, 0.0131, 0.0114, 0.0145, 0.0148, 0.0166], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:21:23,701 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2160, 1.7560, 1.8346, 0.7952, 1.9701, 2.1440, 1.9475, 1.7957], device='cuda:6'), covar=tensor([0.1002, 0.0803, 0.0567, 0.0783, 0.0576, 0.0784, 0.0514, 0.0746], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0159, 0.0121, 0.0138, 0.0134, 0.0124, 0.0148, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.7884e-05, 1.1752e-04, 8.7878e-05, 1.0043e-04, 9.6153e-05, 9.1878e-05, 1.0956e-04, 1.0867e-04], device='cuda:6') 2023-03-26 05:21:59,713 INFO [finetune.py:976] (6/7) Epoch 6, batch 50, loss[loss=0.2073, simple_loss=0.2748, pruned_loss=0.06991, over 4904.00 frames. ], tot_loss[loss=0.2181, simple_loss=0.2788, pruned_loss=0.07867, over 217666.21 frames. 
], batch size: 33, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:22:23,322 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7809, 2.7661, 2.4401, 2.3016, 2.8585, 2.9560, 2.8436, 2.6392], device='cuda:6'), covar=tensor([0.0169, 0.0240, 0.0302, 0.0276, 0.0182, 0.0335, 0.0225, 0.0269], device='cuda:6'), in_proj_covar=tensor([0.0087, 0.0111, 0.0138, 0.0118, 0.0104, 0.0101, 0.0091, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.8117e-05, 8.7615e-05, 1.1100e-04, 9.3341e-05, 8.2424e-05, 7.5017e-05, 6.9768e-05, 8.5209e-05], device='cuda:6') 2023-03-26 05:22:30,503 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.667e+02 1.978e+02 2.446e+02 6.098e+02, threshold=3.955e+02, percent-clipped=3.0 2023-03-26 05:22:41,775 INFO [finetune.py:976] (6/7) Epoch 6, batch 100, loss[loss=0.1749, simple_loss=0.2408, pruned_loss=0.0545, over 4755.00 frames. ], tot_loss[loss=0.214, simple_loss=0.2723, pruned_loss=0.07786, over 380450.09 frames. ], batch size: 27, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:23:12,702 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.05 vs. limit=5.0 2023-03-26 05:23:15,381 INFO [finetune.py:976] (6/7) Epoch 6, batch 150, loss[loss=0.2012, simple_loss=0.2458, pruned_loss=0.07835, over 4315.00 frames. ], tot_loss[loss=0.2089, simple_loss=0.267, pruned_loss=0.07541, over 508358.22 frames. ], batch size: 65, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:23:37,616 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.593e+02 1.877e+02 2.260e+02 4.734e+02, threshold=3.755e+02, percent-clipped=1.0 2023-03-26 05:23:48,133 INFO [finetune.py:976] (6/7) Epoch 6, batch 200, loss[loss=0.2113, simple_loss=0.2805, pruned_loss=0.07108, over 4824.00 frames. ], tot_loss[loss=0.2096, simple_loss=0.2669, pruned_loss=0.07611, over 606910.55 frames. ], batch size: 40, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:23:50,546 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28842.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:23:55,125 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=28848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:24:00,979 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8957, 1.8674, 1.6484, 1.9886, 2.4078, 1.9786, 1.5304, 1.5382], device='cuda:6'), covar=tensor([0.2286, 0.2186, 0.1986, 0.1758, 0.1910, 0.1217, 0.2686, 0.1994], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0210, 0.0202, 0.0187, 0.0239, 0.0177, 0.0215, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:24:04,003 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=28861.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:24:19,124 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28885.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:24:26,391 INFO [finetune.py:976] (6/7) Epoch 6, batch 250, loss[loss=0.2281, simple_loss=0.2944, pruned_loss=0.08088, over 4817.00 frames. ], tot_loss[loss=0.2131, simple_loss=0.2707, pruned_loss=0.07781, over 685456.91 frames. 
], batch size: 33, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:24:47,337 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28911.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:25:02,678 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=28922.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:25:03,152 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.192e+02 1.635e+02 1.950e+02 2.422e+02 4.878e+02, threshold=3.900e+02, percent-clipped=5.0 2023-03-26 05:25:09,279 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:25:13,901 INFO [finetune.py:976] (6/7) Epoch 6, batch 300, loss[loss=0.2356, simple_loss=0.2909, pruned_loss=0.09018, over 4921.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2736, pruned_loss=0.07767, over 744491.36 frames. ], batch size: 33, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:25:16,437 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=28943.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:25:28,665 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28959.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 05:25:43,876 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5456, 1.3155, 2.0837, 3.0718, 2.0789, 2.1438, 1.0502, 2.3313], device='cuda:6'), covar=tensor([0.1838, 0.1689, 0.1254, 0.0550, 0.0889, 0.1556, 0.1807, 0.0673], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0119, 0.0136, 0.0165, 0.0103, 0.0142, 0.0128, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 05:25:47,451 INFO [finetune.py:976] (6/7) Epoch 6, batch 350, loss[loss=0.1862, simple_loss=0.2616, pruned_loss=0.05545, over 4801.00 frames. ], tot_loss[loss=0.2177, simple_loss=0.2773, pruned_loss=0.07904, over 791082.64 frames. ], batch size: 29, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:25:49,479 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=28991.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:26:03,156 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29002.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:26:28,283 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.301e+02 1.836e+02 2.201e+02 2.620e+02 4.241e+02, threshold=4.402e+02, percent-clipped=2.0 2023-03-26 05:26:37,215 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 05:26:38,005 INFO [finetune.py:976] (6/7) Epoch 6, batch 400, loss[loss=0.192, simple_loss=0.2625, pruned_loss=0.06076, over 4764.00 frames. ], tot_loss[loss=0.217, simple_loss=0.2777, pruned_loss=0.07817, over 828702.44 frames. 
], batch size: 28, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:26:55,787 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1536, 2.3068, 2.0932, 1.5071, 2.4872, 2.4808, 2.4231, 2.0218], device='cuda:6'), covar=tensor([0.0724, 0.0593, 0.0821, 0.1031, 0.0411, 0.0700, 0.0588, 0.0940], device='cuda:6'), in_proj_covar=tensor([0.0138, 0.0134, 0.0144, 0.0128, 0.0112, 0.0143, 0.0146, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:27:01,613 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29063.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:27:17,311 INFO [finetune.py:976] (6/7) Epoch 6, batch 450, loss[loss=0.2211, simple_loss=0.2922, pruned_loss=0.07502, over 4897.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2764, pruned_loss=0.07777, over 857176.29 frames. ], batch size: 32, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:27:45,345 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.708e+02 1.996e+02 2.284e+02 5.200e+02, threshold=3.993e+02, percent-clipped=1.0 2023-03-26 05:27:55,105 INFO [finetune.py:976] (6/7) Epoch 6, batch 500, loss[loss=0.2018, simple_loss=0.2658, pruned_loss=0.06891, over 4875.00 frames. ], tot_loss[loss=0.2131, simple_loss=0.2733, pruned_loss=0.0764, over 879062.06 frames. ], batch size: 31, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:27:57,524 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29142.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:28:01,657 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29148.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:28:28,350 INFO [finetune.py:976] (6/7) Epoch 6, batch 550, loss[loss=0.186, simple_loss=0.2486, pruned_loss=0.06165, over 4818.00 frames. ], tot_loss[loss=0.2108, simple_loss=0.2705, pruned_loss=0.07561, over 897313.76 frames. ], batch size: 38, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:28:30,767 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29190.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:28:34,890 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29196.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:29:00,592 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29217.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:29:04,678 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.638e+02 2.076e+02 2.525e+02 4.090e+02, threshold=4.153e+02, percent-clipped=1.0 2023-03-26 05:29:18,488 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29236.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:29:20,235 INFO [finetune.py:976] (6/7) Epoch 6, batch 600, loss[loss=0.2395, simple_loss=0.3023, pruned_loss=0.08838, over 4904.00 frames. ], tot_loss[loss=0.2124, simple_loss=0.2714, pruned_loss=0.07666, over 909270.06 frames. ], batch size: 36, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:30:24,096 INFO [finetune.py:976] (6/7) Epoch 6, batch 650, loss[loss=0.2964, simple_loss=0.3523, pruned_loss=0.1203, over 4791.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2755, pruned_loss=0.07823, over 917535.95 frames. 
], batch size: 59, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:30:34,117 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29297.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:31:13,224 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.735e+02 1.982e+02 2.438e+02 4.583e+02, threshold=3.965e+02, percent-clipped=2.0 2023-03-26 05:31:33,679 INFO [finetune.py:976] (6/7) Epoch 6, batch 700, loss[loss=0.1797, simple_loss=0.2543, pruned_loss=0.05254, over 4742.00 frames. ], tot_loss[loss=0.2168, simple_loss=0.2769, pruned_loss=0.07838, over 924865.30 frames. ], batch size: 27, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:31:39,920 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7643, 1.6212, 1.5743, 1.7348, 1.4102, 3.8405, 1.6617, 2.1908], device='cuda:6'), covar=tensor([0.3520, 0.2449, 0.2157, 0.2427, 0.1862, 0.0152, 0.2501, 0.1269], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0117, 0.0121, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 05:31:44,864 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-26 05:31:46,291 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29358.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:32:17,024 INFO [finetune.py:976] (6/7) Epoch 6, batch 750, loss[loss=0.2357, simple_loss=0.2927, pruned_loss=0.0894, over 4933.00 frames. ], tot_loss[loss=0.2181, simple_loss=0.2781, pruned_loss=0.07898, over 931313.86 frames. ], batch size: 38, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:32:35,672 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6678, 1.6943, 1.7408, 0.9306, 1.7274, 1.9257, 1.8483, 1.5565], device='cuda:6'), covar=tensor([0.0881, 0.0640, 0.0391, 0.0623, 0.0400, 0.0501, 0.0401, 0.0682], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0160, 0.0122, 0.0138, 0.0134, 0.0125, 0.0149, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.9260e-05, 1.1851e-04, 8.8709e-05, 1.0097e-04, 9.6707e-05, 9.2235e-05, 1.1029e-04, 1.0897e-04], device='cuda:6') 2023-03-26 05:32:53,683 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3484, 2.8979, 2.7493, 1.2587, 2.9416, 2.1052, 0.7479, 1.8484], device='cuda:6'), covar=tensor([0.2646, 0.2332, 0.2003, 0.3770, 0.1502, 0.1238, 0.4405, 0.1796], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0172, 0.0164, 0.0129, 0.0157, 0.0122, 0.0146, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 05:32:59,103 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.656e+01 1.814e+02 2.109e+02 2.501e+02 5.044e+02, threshold=4.217e+02, percent-clipped=3.0 2023-03-26 05:33:15,875 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7745, 2.5009, 2.5108, 1.4398, 2.5105, 2.0663, 1.8462, 2.1349], device='cuda:6'), covar=tensor([0.0915, 0.0762, 0.1415, 0.1942, 0.1611, 0.2024, 0.2050, 0.1184], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0201, 0.0189, 0.0216, 0.0209, 0.0219, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:33:26,484 INFO [finetune.py:976] (6/7) Epoch 6, batch 800, loss[loss=0.1982, simple_loss=0.2711, pruned_loss=0.06266, over 4825.00 frames. 
], tot_loss[loss=0.2166, simple_loss=0.2771, pruned_loss=0.07803, over 937284.22 frames. ], batch size: 47, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:34:10,554 INFO [finetune.py:976] (6/7) Epoch 6, batch 850, loss[loss=0.2146, simple_loss=0.2662, pruned_loss=0.08147, over 4759.00 frames. ], tot_loss[loss=0.214, simple_loss=0.2746, pruned_loss=0.07669, over 940686.35 frames. ], batch size: 27, lr: 3.92e-03, grad_scale: 32.0 2023-03-26 05:34:43,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29517.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:34:52,273 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.241e+02 1.650e+02 2.021e+02 2.394e+02 5.702e+02, threshold=4.042e+02, percent-clipped=1.0 2023-03-26 05:35:14,736 INFO [finetune.py:976] (6/7) Epoch 6, batch 900, loss[loss=0.2327, simple_loss=0.2899, pruned_loss=0.08774, over 4765.00 frames. ], tot_loss[loss=0.212, simple_loss=0.2721, pruned_loss=0.07599, over 941489.17 frames. ], batch size: 26, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:35:32,187 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29551.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:35:46,204 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29565.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:36:14,404 INFO [finetune.py:976] (6/7) Epoch 6, batch 950, loss[loss=0.2155, simple_loss=0.2844, pruned_loss=0.07333, over 4796.00 frames. ], tot_loss[loss=0.2101, simple_loss=0.27, pruned_loss=0.07515, over 943437.02 frames. ], batch size: 29, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:36:21,385 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29592.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:36:43,981 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29612.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:36:56,223 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.661e+02 1.977e+02 2.359e+02 4.237e+02, threshold=3.954e+02, percent-clipped=2.0 2023-03-26 05:37:17,826 INFO [finetune.py:976] (6/7) Epoch 6, batch 1000, loss[loss=0.304, simple_loss=0.3634, pruned_loss=0.1223, over 4842.00 frames. ], tot_loss[loss=0.2147, simple_loss=0.2745, pruned_loss=0.07747, over 945181.26 frames. ], batch size: 49, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:37:23,681 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1000, 2.2815, 2.1375, 1.6452, 2.0383, 2.5179, 2.2265, 1.9260], device='cuda:6'), covar=tensor([0.0654, 0.0516, 0.0773, 0.0978, 0.1292, 0.0555, 0.0660, 0.0896], device='cuda:6'), in_proj_covar=tensor([0.0139, 0.0136, 0.0146, 0.0129, 0.0114, 0.0144, 0.0148, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:37:45,083 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29658.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:37:47,501 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29662.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:37:58,422 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.91 vs. limit=2.0 2023-03-26 05:38:20,624 INFO [finetune.py:976] (6/7) Epoch 6, batch 1050, loss[loss=0.232, simple_loss=0.3025, pruned_loss=0.08071, over 4918.00 frames. 
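The learning rate decays very slowly through this stretch, from 3.92e-03 to 3.91e-03 in the entries above. Assuming an Eden-style schedule with base_lr=0.004, lr_batches=100000 and lr_epochs=100 (assumed values, not visible in this part of the log), the printed figures are reproduced from the global batch count and epoch:

```python
# Eden-style schedule check (base_lr, lr_batches, lr_epochs are assumptions).
def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
    batch_factor = ((batch / lr_batches) ** 2 + 1.0) ** -0.25
    epoch_factor = ((epoch / lr_epochs) ** 2 + 1.0) ** -0.25
    return base_lr * batch_factor * epoch_factor

# Around batch_count 29500 in epoch 6 this gives ~3.91e-03, as logged above.
print(eden_lr(0.004, batch=29_500, epoch=6))
```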
], tot_loss[loss=0.2158, simple_loss=0.2764, pruned_loss=0.07766, over 948143.78 frames. ], batch size: 36, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:38:41,610 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29706.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:39:02,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.795e+02 2.095e+02 2.533e+02 7.754e+02, threshold=4.191e+02, percent-clipped=4.0 2023-03-26 05:39:07,856 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29723.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 05:39:23,373 INFO [finetune.py:976] (6/7) Epoch 6, batch 1100, loss[loss=0.2141, simple_loss=0.2744, pruned_loss=0.07697, over 4867.00 frames. ], tot_loss[loss=0.2188, simple_loss=0.2795, pruned_loss=0.07908, over 952102.21 frames. ], batch size: 31, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:39:51,241 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7119, 3.9739, 3.7093, 1.8933, 4.0798, 2.9306, 0.7511, 2.6040], device='cuda:6'), covar=tensor([0.2384, 0.1459, 0.1380, 0.3082, 0.0836, 0.1022, 0.4564, 0.1514], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0172, 0.0164, 0.0128, 0.0156, 0.0123, 0.0146, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 05:40:24,913 INFO [finetune.py:976] (6/7) Epoch 6, batch 1150, loss[loss=0.1853, simple_loss=0.2336, pruned_loss=0.06853, over 4716.00 frames. ], tot_loss[loss=0.2189, simple_loss=0.2794, pruned_loss=0.07923, over 953137.32 frames. ], batch size: 23, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:40:37,976 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=29802.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:41:01,070 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.060e+02 1.742e+02 2.057e+02 2.377e+02 6.600e+02, threshold=4.115e+02, percent-clipped=1.0 2023-03-26 05:41:11,697 INFO [finetune.py:976] (6/7) Epoch 6, batch 1200, loss[loss=0.2386, simple_loss=0.303, pruned_loss=0.08715, over 4815.00 frames. ], tot_loss[loss=0.2182, simple_loss=0.2785, pruned_loss=0.079, over 953284.51 frames. ], batch size: 41, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:41:14,638 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2193, 2.7879, 2.6609, 1.3696, 2.7498, 2.3021, 2.0415, 2.3435], device='cuda:6'), covar=tensor([0.0868, 0.1209, 0.2068, 0.2745, 0.1836, 0.2360, 0.2343, 0.1538], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0201, 0.0190, 0.0216, 0.0208, 0.0219, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:41:28,770 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=29863.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:41:45,292 INFO [finetune.py:976] (6/7) Epoch 6, batch 1250, loss[loss=0.276, simple_loss=0.3176, pruned_loss=0.1172, over 4820.00 frames. ], tot_loss[loss=0.2166, simple_loss=0.2758, pruned_loss=0.07873, over 951698.53 frames. 
], batch size: 40, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:41:47,187 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=29892.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:41:57,709 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=29907.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:42:07,776 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.638e+02 1.953e+02 2.201e+02 4.150e+02, threshold=3.906e+02, percent-clipped=1.0 2023-03-26 05:42:18,518 INFO [finetune.py:976] (6/7) Epoch 6, batch 1300, loss[loss=0.187, simple_loss=0.2561, pruned_loss=0.05897, over 4931.00 frames. ], tot_loss[loss=0.2118, simple_loss=0.2713, pruned_loss=0.0761, over 953887.40 frames. ], batch size: 33, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:42:19,141 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=29940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:42:40,041 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8121, 1.6529, 1.5270, 1.8875, 2.3147, 1.7985, 1.4237, 1.5181], device='cuda:6'), covar=tensor([0.1892, 0.2051, 0.1638, 0.1511, 0.1634, 0.1173, 0.2628, 0.1681], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0210, 0.0203, 0.0187, 0.0239, 0.0177, 0.0215, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:42:53,738 INFO [finetune.py:976] (6/7) Epoch 6, batch 1350, loss[loss=0.1916, simple_loss=0.2572, pruned_loss=0.06295, over 4778.00 frames. ], tot_loss[loss=0.2123, simple_loss=0.2722, pruned_loss=0.07617, over 955706.29 frames. ], batch size: 26, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:43:22,438 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30018.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 05:43:25,361 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.660e+02 2.040e+02 2.486e+02 4.804e+02, threshold=4.081e+02, percent-clipped=2.0 2023-03-26 05:43:35,490 INFO [finetune.py:976] (6/7) Epoch 6, batch 1400, loss[loss=0.2225, simple_loss=0.2933, pruned_loss=0.07585, over 4911.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2761, pruned_loss=0.0779, over 956325.77 frames. ], batch size: 35, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:43:40,769 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 05:44:14,310 INFO [finetune.py:976] (6/7) Epoch 6, batch 1450, loss[loss=0.2019, simple_loss=0.2586, pruned_loss=0.07264, over 4770.00 frames. ], tot_loss[loss=0.2177, simple_loss=0.2779, pruned_loss=0.07869, over 956327.80 frames. 
], batch size: 26, lr: 3.91e-03, grad_scale: 64.0 2023-03-26 05:44:34,163 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4842, 2.1307, 1.7489, 0.7242, 1.8732, 1.9473, 1.7295, 1.9069], device='cuda:6'), covar=tensor([0.0833, 0.0832, 0.1451, 0.2251, 0.1403, 0.2315, 0.2177, 0.0997], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0202, 0.0203, 0.0191, 0.0217, 0.0209, 0.0220, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:44:58,650 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.184e+02 1.825e+02 2.216e+02 2.642e+02 7.386e+02, threshold=4.431e+02, percent-clipped=2.0 2023-03-26 05:44:59,437 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30125.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:45:16,736 INFO [finetune.py:976] (6/7) Epoch 6, batch 1500, loss[loss=0.2126, simple_loss=0.2686, pruned_loss=0.07835, over 4784.00 frames. ], tot_loss[loss=0.2192, simple_loss=0.2795, pruned_loss=0.07948, over 956244.40 frames. ], batch size: 25, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:45:17,492 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9644, 1.8669, 1.4045, 1.7919, 1.7746, 1.7091, 1.7387, 2.4666], device='cuda:6'), covar=tensor([0.6245, 0.6088, 0.5150, 0.6378, 0.5648, 0.3664, 0.6084, 0.2532], device='cuda:6'), in_proj_covar=tensor([0.0282, 0.0257, 0.0219, 0.0284, 0.0239, 0.0201, 0.0245, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 05:45:38,717 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30158.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:45:59,668 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30186.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:46:01,367 INFO [finetune.py:976] (6/7) Epoch 6, batch 1550, loss[loss=0.1767, simple_loss=0.2368, pruned_loss=0.05833, over 4778.00 frames. ], tot_loss[loss=0.2195, simple_loss=0.2798, pruned_loss=0.0796, over 955324.78 frames. ], batch size: 27, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:46:12,997 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30207.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:46:13,958 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.68 vs. limit=5.0 2023-03-26 05:46:14,689 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30209.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 05:46:25,110 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.142e+02 1.698e+02 2.038e+02 2.458e+02 3.858e+02, threshold=4.076e+02, percent-clipped=0.0 2023-03-26 05:46:34,721 INFO [finetune.py:976] (6/7) Epoch 6, batch 1600, loss[loss=0.1717, simple_loss=0.2457, pruned_loss=0.04882, over 4755.00 frames. ], tot_loss[loss=0.2173, simple_loss=0.2769, pruned_loss=0.07883, over 954531.45 frames. ], batch size: 28, lr: 3.91e-03, grad_scale: 32.0 2023-03-26 05:46:40,440 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. 
2023-03-26 05:46:44,955 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30255.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:46:56,015 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30270.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:47:08,006 INFO [finetune.py:976] (6/7) Epoch 6, batch 1650, loss[loss=0.1872, simple_loss=0.2596, pruned_loss=0.05744, over 4824.00 frames. ], tot_loss[loss=0.2127, simple_loss=0.272, pruned_loss=0.07668, over 954240.52 frames. ], batch size: 39, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:47:27,133 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30318.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:47:31,616 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.710e+02 1.999e+02 2.408e+02 3.997e+02, threshold=3.998e+02, percent-clipped=0.0
2023-03-26 05:47:38,375 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7182, 1.5116, 1.6540, 1.6699, 1.1512, 3.3845, 1.2708, 1.8218], device='cuda:6'), covar=tensor([0.3458, 0.2646, 0.2026, 0.2432, 0.1999, 0.0223, 0.2784, 0.1362], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0114, 0.0118, 0.0122, 0.0118, 0.0099, 0.0102, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 05:47:41,359 INFO [finetune.py:976] (6/7) Epoch 6, batch 1700, loss[loss=0.2332, simple_loss=0.2862, pruned_loss=0.09008, over 4891.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2694, pruned_loss=0.07505, over 955489.41 frames. ], batch size: 32, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:47:42,827 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0
2023-03-26 05:47:43,926 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5939, 1.3558, 2.0595, 3.1698, 2.1817, 2.2639, 0.6533, 2.5121], device='cuda:6'), covar=tensor([0.1798, 0.1679, 0.1306, 0.0706, 0.0868, 0.1423, 0.2198, 0.0614], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0118, 0.0136, 0.0166, 0.0103, 0.0141, 0.0129, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 05:47:55,193 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30360.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:47:58,775 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30366.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:48:27,035 INFO [finetune.py:976] (6/7) Epoch 6, batch 1750, loss[loss=0.28, simple_loss=0.3277, pruned_loss=0.1161, over 4075.00 frames. ], tot_loss[loss=0.2134, simple_loss=0.2728, pruned_loss=0.07698, over 954006.38 frames. ], batch size: 65, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:48:48,754 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30421.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:48:50,962 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.220e+01 1.748e+02 2.265e+02 2.778e+02 4.150e+02, threshold=4.530e+02, percent-clipped=1.0
2023-03-26 05:49:00,663 INFO [finetune.py:976] (6/7) Epoch 6, batch 1800, loss[loss=0.2291, simple_loss=0.2863, pruned_loss=0.08599, over 4884.00 frames. ], tot_loss[loss=0.2172, simple_loss=0.2769, pruned_loss=0.07869, over 953798.35 frames. ], batch size: 32, lr: 3.91e-03, grad_scale: 32.0
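The zipformer.py:1188 entries record per-stack layer-drop decisions: each encoder stack has its own staggered warmup window (the warmup_begin/warmup_end pairs), and on most batches nothing is dropped, with an occasional num_to_drop=1 long after warmup. A hypothetical schedule with roughly that behavior; the probabilities and annealing below are guesses, since the log only shows outcomes:

    import random

    def choose_layers_to_drop(batch_count, warmup_begin, warmup_end,
                              num_layers, floor_prob=0.025):
        if batch_count < warmup_begin:
            drop_prob = 0.5                       # aggressive early on (assumed)
        elif batch_count < warmup_end:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            drop_prob = 0.5 * (1.0 - frac) + floor_prob * frac  # anneal down
        else:
            drop_prob = floor_prob                # rare drops after warmup
        num_to_drop = sum(random.random() < drop_prob for _ in range(num_layers))
        return num_to_drop, set(random.sample(range(num_layers), num_to_drop))

    # At batch_count=30318.0, far past warmup_end=2666.7, this yields
    # num_to_drop=0 almost always, matching the entries above.
    print(choose_layers_to_drop(30318.0, 2000.0, 2666.7, num_layers=4))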
2023-03-26 05:49:13,233 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30458.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:49:20,235 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0
2023-03-26 05:49:29,000 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30481.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:49:33,833 INFO [finetune.py:976] (6/7) Epoch 6, batch 1850, loss[loss=0.2322, simple_loss=0.2936, pruned_loss=0.08533, over 4119.00 frames. ], tot_loss[loss=0.2185, simple_loss=0.2783, pruned_loss=0.07941, over 952735.78 frames. ], batch size: 65, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:49:44,728 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30506.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:50:03,761 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.671e+02 2.010e+02 2.577e+02 4.739e+02, threshold=4.020e+02, percent-clipped=1.0
2023-03-26 05:50:22,793 INFO [finetune.py:976] (6/7) Epoch 6, batch 1900, loss[loss=0.1917, simple_loss=0.2536, pruned_loss=0.06495, over 4903.00 frames. ], tot_loss[loss=0.219, simple_loss=0.2792, pruned_loss=0.07943, over 952668.08 frames. ], batch size: 36, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:50:30,851 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-26 05:50:31,869 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5676, 1.0790, 0.7437, 1.4418, 1.9494, 0.6824, 1.2794, 1.4369], device='cuda:6'), covar=tensor([0.1482, 0.2126, 0.1891, 0.1198, 0.1955, 0.2190, 0.1503, 0.1959], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0099, 0.0116, 0.0094, 0.0125, 0.0097, 0.0100, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 05:50:42,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3932, 1.3261, 1.9581, 2.8924, 2.0059, 2.0602, 1.0204, 2.3528], device='cuda:6'), covar=tensor([0.1947, 0.1609, 0.1224, 0.0676, 0.0863, 0.1482, 0.1802, 0.0681], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0117, 0.0135, 0.0164, 0.0102, 0.0140, 0.0127, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 05:50:52,226 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30565.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:51:16,288 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5677, 1.4828, 1.3570, 1.6498, 1.6224, 1.5652, 0.8625, 1.3299], device='cuda:6'), covar=tensor([0.2223, 0.2115, 0.1830, 0.1650, 0.1684, 0.1217, 0.2943, 0.1859], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0210, 0.0203, 0.0187, 0.0238, 0.0177, 0.0215, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 05:51:20,335 INFO [finetune.py:976] (6/7) Epoch 6, batch 1950, loss[loss=0.1929, simple_loss=0.2538, pruned_loss=0.06603, over 4771.00 frames. ], tot_loss[loss=0.2184, simple_loss=0.2785, pruned_loss=0.0792, over 952839.55 frames. ], batch size: 27, lr: 3.91e-03, grad_scale: 32.0
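The zipformer.py:2441 diagnostics dump one attn_weights_entropy value per attention head, alongside covariance summaries of the projection weights. The entropy itself is the standard Shannon entropy of each head's attention distribution, averaged over query positions; a self-contained sketch:

    import torch

    def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        """attn: (num_heads, num_queries, num_keys), each row summing to 1.
        Returns one averaged entropy per head; low values mean sharply peaked
        attention, values near log(num_keys) mean nearly uniform attention."""
        ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (heads, queries)
        return ent.mean(dim=-1)                         # one value per head

    attn = torch.softmax(torch.randn(8, 50, 50), dim=-1)
    print(attn_weights_entropy(attn))  # 8 values, like the tensors above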
2023-03-26 05:51:28,881 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7825, 1.5862, 1.6231, 1.6963, 1.1361, 3.8164, 1.4645, 2.1568], device='cuda:6'), covar=tensor([0.3298, 0.2461, 0.2135, 0.2394, 0.2024, 0.0144, 0.2640, 0.1278], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0121, 0.0117, 0.0099, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0005, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 05:51:56,512 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7936, 2.4867, 2.0909, 1.1957, 2.2322, 2.2733, 2.0344, 2.2473], device='cuda:6'), covar=tensor([0.0676, 0.0740, 0.1317, 0.1848, 0.1263, 0.1639, 0.1759, 0.0791], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0202, 0.0203, 0.0191, 0.0218, 0.0211, 0.0221, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 05:51:58,194 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.556e+02 1.883e+02 2.215e+02 4.222e+02, threshold=3.767e+02, percent-clipped=2.0
2023-03-26 05:52:06,489 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7213, 1.4611, 1.1349, 0.2633, 1.2681, 1.4763, 1.4864, 1.4159], device='cuda:6'), covar=tensor([0.0896, 0.0845, 0.1388, 0.2050, 0.1445, 0.2492, 0.2209, 0.0919], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0202, 0.0203, 0.0191, 0.0217, 0.0211, 0.0221, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 05:52:09,311 INFO [finetune.py:976] (6/7) Epoch 6, batch 2000, loss[loss=0.1817, simple_loss=0.2281, pruned_loss=0.06767, over 3952.00 frames. ], tot_loss[loss=0.2157, simple_loss=0.2752, pruned_loss=0.07809, over 952679.53 frames. ], batch size: 17, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:52:14,245 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8084, 1.6949, 2.0769, 1.5189, 1.9756, 1.9621, 1.6196, 2.2695], device='cuda:6'), covar=tensor([0.1505, 0.2203, 0.1332, 0.1957, 0.1003, 0.1638, 0.2726, 0.0809], device='cuda:6'), in_proj_covar=tensor([0.0204, 0.0203, 0.0197, 0.0193, 0.0181, 0.0219, 0.0215, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 05:52:19,631 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30655.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:52:39,232 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4741, 1.3935, 1.6676, 1.7399, 1.4852, 3.2600, 1.2721, 1.5480], device='cuda:6'), covar=tensor([0.0978, 0.1862, 0.1081, 0.1037, 0.1704, 0.0265, 0.1529, 0.1764], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0092, 0.0083, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 05:52:49,730 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1465, 2.0633, 1.7403, 1.5566, 2.2825, 2.2925, 2.0921, 1.9483], device='cuda:6'), covar=tensor([0.0319, 0.0338, 0.0544, 0.0434, 0.0276, 0.0461, 0.0366, 0.0360], device='cuda:6'), in_proj_covar=tensor([0.0086, 0.0111, 0.0137, 0.0116, 0.0103, 0.0100, 0.0090, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.7482e-05, 8.7208e-05, 1.0942e-04, 9.2036e-05, 8.1321e-05, 7.3872e-05, 6.8854e-05, 8.4549e-05], device='cuda:6')
2023-03-26 05:52:56,173 INFO [finetune.py:976] (6/7) Epoch 6, batch 2050, loss[loss=0.1566, simple_loss=0.22, pruned_loss=0.0466, over 4778.00 frames. ], tot_loss[loss=0.213, simple_loss=0.272, pruned_loss=0.07698, over 951689.28 frames. ], batch size: 27, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:53:14,341 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=30716.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:53:14,386 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30716.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 05:53:18,958 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0545, 1.7201, 2.4404, 3.5310, 2.6975, 2.6545, 1.2400, 2.7621], device='cuda:6'), covar=tensor([0.1674, 0.1516, 0.1253, 0.0587, 0.0679, 0.1331, 0.1902, 0.0706], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0118, 0.0135, 0.0166, 0.0102, 0.0142, 0.0128, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 05:53:20,057 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.687e+02 1.889e+02 2.318e+02 4.112e+02, threshold=3.779e+02, percent-clipped=3.0
2023-03-26 05:53:40,714 INFO [finetune.py:976] (6/7) Epoch 6, batch 2100, loss[loss=0.2144, simple_loss=0.2767, pruned_loss=0.07608, over 4761.00 frames. ], tot_loss[loss=0.2146, simple_loss=0.2733, pruned_loss=0.07791, over 950302.03 frames. ], batch size: 54, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:54:13,642 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30781.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:54:18,840 INFO [finetune.py:976] (6/7) Epoch 6, batch 2150, loss[loss=0.1801, simple_loss=0.2253, pruned_loss=0.06749, over 4386.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2743, pruned_loss=0.07793, over 951413.40 frames. ], batch size: 19, lr: 3.91e-03, grad_scale: 32.0
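In the finetune.py:976 entries, loss[...] describes the current batch and tot_loss[...] a frame-weighted average over recent batches. The numbers are consistent with the total being the pruned RNN-T loss plus half the simple (linear-joiner) loss, e.g. 0.5 * 0.2561 + 0.05897 ~= 0.187 for batch 1300 earlier in this section. A sketch of that arithmetic; the 0.5 scale and the exact averaging window are inferred from the logged values, not taken from the code:

    class FrameAveragedLoss:
        """Frame-weighted running average, as in the tot_loss[...] fields."""
        def __init__(self):
            self.loss_sum, self.frames = 0.0, 0.0

        def update(self, loss, num_frames):
            self.loss_sum += loss * num_frames
            self.frames += num_frames

        def average(self):
            return self.loss_sum / max(self.frames, 1.0)

    def combine(simple_loss, pruned_loss, simple_loss_scale=0.5):
        return simple_loss_scale * simple_loss + pruned_loss

    tracker = FrameAveragedLoss()
    tracker.update(combine(0.2561, 0.05897), num_frames=4931)
    print(round(combine(0.2561, 0.05897), 3), tracker.average())  # 0.187 ...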
2023-03-26 05:54:42,039 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.708e+02 2.011e+02 2.625e+02 4.679e+02, threshold=4.022e+02, percent-clipped=7.0
2023-03-26 05:54:45,043 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4434, 1.2548, 1.8858, 3.0143, 1.9561, 2.1572, 0.7410, 2.3732], device='cuda:6'), covar=tensor([0.2085, 0.1966, 0.1623, 0.0949, 0.1002, 0.2055, 0.2332, 0.0788], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0119, 0.0137, 0.0168, 0.0103, 0.0143, 0.0130, 0.0104], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 05:54:46,133 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30829.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:54:52,232 INFO [finetune.py:976] (6/7) Epoch 6, batch 2200, loss[loss=0.1579, simple_loss=0.2313, pruned_loss=0.04231, over 4840.00 frames. ], tot_loss[loss=0.2193, simple_loss=0.2785, pruned_loss=0.08002, over 950459.87 frames. ], batch size: 30, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:55:16,390 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=30865.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:55:37,465 INFO [finetune.py:976] (6/7) Epoch 6, batch 2250, loss[loss=0.2188, simple_loss=0.2824, pruned_loss=0.07761, over 4728.00 frames. ], tot_loss[loss=0.2193, simple_loss=0.2793, pruned_loss=0.07969, over 953045.77 frames. ], batch size: 59, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:55:38,171 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3791, 3.4140, 3.1968, 1.2648, 3.4571, 2.5524, 0.7147, 2.1533], device='cuda:6'), covar=tensor([0.2324, 0.1514, 0.1450, 0.3182, 0.1024, 0.0956, 0.3917, 0.1442], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0169, 0.0162, 0.0127, 0.0155, 0.0122, 0.0144, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 05:55:58,615 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30903.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:10,383 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=30913.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:22,248 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.611e+02 1.958e+02 2.302e+02 5.232e+02, threshold=3.915e+02, percent-clipped=2.0
2023-03-26 05:56:23,000 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30925.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:30,801 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9079, 1.6741, 2.3645, 1.5613, 2.2121, 2.0461, 1.6125, 2.3938], device='cuda:6'), covar=tensor([0.1592, 0.2336, 0.1726, 0.2433, 0.1012, 0.1816, 0.2974, 0.0915], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0204, 0.0197, 0.0193, 0.0181, 0.0219, 0.0215, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 05:56:34,301 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=30934.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:56:42,904 INFO [finetune.py:976] (6/7) Epoch 6, batch 2300, loss[loss=0.1861, simple_loss=0.2565, pruned_loss=0.05784, over 4900.00 frames. ], tot_loss[loss=0.2181, simple_loss=0.2785, pruned_loss=0.07885, over 953165.22 frames. ], batch size: 35, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:57:16,230 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30964.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:57:46,794 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30986.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:57:49,010 INFO [finetune.py:976] (6/7) Epoch 6, batch 2350, loss[loss=0.2633, simple_loss=0.3073, pruned_loss=0.1097, over 4906.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2765, pruned_loss=0.07819, over 951549.41 frames. ], batch size: 36, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:57:57,509 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=30995.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:58:19,822 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31011.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 05:58:19,850 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31011.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 05:58:28,258 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31016.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:58:33,048 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.621e+02 1.904e+02 2.250e+02 3.149e+02, threshold=3.808e+02, percent-clipped=0.0
2023-03-26 05:58:53,274 INFO [finetune.py:976] (6/7) Epoch 6, batch 2400, loss[loss=0.1953, simple_loss=0.2617, pruned_loss=0.06448, over 4892.00 frames. ], tot_loss[loss=0.2127, simple_loss=0.2724, pruned_loss=0.0765, over 952832.89 frames. ], batch size: 35, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:59:02,881 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 05:59:07,849 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9829, 2.7139, 2.7991, 2.7879, 2.6602, 2.6040, 3.0590, 1.1941], device='cuda:6'), covar=tensor([0.1679, 0.1894, 0.1770, 0.1980, 0.2479, 0.2376, 0.1870, 0.5839], device='cuda:6'), in_proj_covar=tensor([0.0356, 0.0243, 0.0276, 0.0292, 0.0332, 0.0283, 0.0302, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 05:59:17,404 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31064.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 05:59:24,047 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31072.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 05:59:34,772 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1281, 1.2854, 0.7951, 1.9395, 2.3709, 1.8177, 1.7927, 1.8833], device='cuda:6'), covar=tensor([0.1371, 0.2069, 0.2168, 0.1134, 0.1836, 0.1982, 0.1261, 0.1874], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0115, 0.0093, 0.0124, 0.0097, 0.0100, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 05:59:37,019 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0
2023-03-26 05:59:44,204 INFO [finetune.py:976] (6/7) Epoch 6, batch 2450, loss[loss=0.2545, simple_loss=0.3015, pruned_loss=0.1037, over 4926.00 frames. ], tot_loss[loss=0.2107, simple_loss=0.2697, pruned_loss=0.07585, over 951171.28 frames. ], batch size: 38, lr: 3.91e-03, grad_scale: 32.0
2023-03-26 05:59:44,971 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.55 vs. limit=5.0
2023-03-26 06:00:03,374 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8044, 0.9965, 1.6613, 1.5918, 1.4241, 1.4345, 1.4046, 1.4867], device='cuda:6'), covar=tensor([0.4203, 0.5848, 0.4782, 0.5166, 0.6147, 0.4683, 0.6549, 0.4523], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0246, 0.0254, 0.0256, 0.0242, 0.0218, 0.0273, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:00:14,916 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6112, 1.5043, 1.3616, 1.6061, 2.1141, 1.6657, 1.2502, 1.2887], device='cuda:6'), covar=tensor([0.2464, 0.2350, 0.2070, 0.1894, 0.1865, 0.1343, 0.2965, 0.2031], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0210, 0.0204, 0.0187, 0.0238, 0.0176, 0.0214, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:00:21,949 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.316e+02 1.762e+02 2.221e+02 2.552e+02 5.044e+02, threshold=4.442e+02, percent-clipped=4.0
2023-03-26 06:00:30,922 INFO [finetune.py:976] (6/7) Epoch 6, batch 2500, loss[loss=0.1523, simple_loss=0.2305, pruned_loss=0.03708, over 4754.00 frames. ], tot_loss[loss=0.2133, simple_loss=0.2719, pruned_loss=0.07739, over 950227.23 frames. ], batch size: 27, lr: 3.91e-03, grad_scale: 16.0
2023-03-26 06:00:57,821 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.66 vs. limit=5.0
2023-03-26 06:01:06,651 INFO [finetune.py:976] (6/7) Epoch 6, batch 2550, loss[loss=0.2184, simple_loss=0.2747, pruned_loss=0.08102, over 4900.00 frames. ], tot_loss[loss=0.2171, simple_loss=0.2764, pruned_loss=0.07889, over 952246.59 frames. ], batch size: 32, lr: 3.91e-03, grad_scale: 16.0
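The grad_scale field drops from 32.0 at batch 2450 to 16.0 at batch 2500 above, the signature of mixed-precision training with a dynamic loss scaler: when a scaled step produces inf/nan gradients, the step is skipped and the scale is halved. A minimal sketch with the standard torch.cuda.amp API; the model and loss here are placeholders, not the recipe's:

    import torch

    model = torch.nn.Linear(80, 500).cuda()      # assumes a CUDA device
    opt = torch.optim.Adam(model.parameters())
    scaler = torch.cuda.amp.GradScaler()

    for _ in range(3):
        x = torch.randn(8, 80, device="cuda")
        with torch.cuda.amp.autocast():
            loss = model(x).pow(2).mean()
        opt.zero_grad()
        scaler.scale(loss).backward()  # backward on the scaled loss
        scaler.step(opt)               # skips the update if grads overflowed
        scaler.update()                # halves the scale after an overflow
        print(scaler.get_scale())      # the value logged as grad_scale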
2023-03-26 06:01:17,980 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7396, 3.8604, 3.6826, 2.1250, 3.9327, 2.9225, 0.7756, 2.8115], device='cuda:6'), covar=tensor([0.2605, 0.1798, 0.1517, 0.2963, 0.0962, 0.0991, 0.4506, 0.1446], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0167, 0.0160, 0.0125, 0.0153, 0.0120, 0.0142, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 06:01:28,423 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0871, 1.9164, 1.8177, 2.0977, 2.5492, 2.0402, 1.7991, 1.5416], device='cuda:6'), covar=tensor([0.2222, 0.2267, 0.1937, 0.1769, 0.1963, 0.1217, 0.2456, 0.1944], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0211, 0.0204, 0.0187, 0.0238, 0.0177, 0.0214, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:01:50,622 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.680e+02 2.034e+02 2.315e+02 3.655e+02, threshold=4.067e+02, percent-clipped=0.0
2023-03-26 06:02:04,208 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3001, 1.2628, 1.3683, 1.5087, 1.3878, 2.9438, 1.1321, 1.4936], device='cuda:6'), covar=tensor([0.1065, 0.1886, 0.1247, 0.1057, 0.1734, 0.0291, 0.1636, 0.1729], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0082, 0.0077, 0.0079, 0.0093, 0.0083, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 06:02:04,702 INFO [finetune.py:976] (6/7) Epoch 6, batch 2600, loss[loss=0.2398, simple_loss=0.3034, pruned_loss=0.08814, over 4849.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2767, pruned_loss=0.07813, over 951905.12 frames. ], batch size: 44, lr: 3.91e-03, grad_scale: 16.0
2023-03-26 06:02:05,442 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8083, 2.4397, 2.0556, 1.2086, 2.2149, 2.2407, 1.9766, 2.2182], device='cuda:6'), covar=tensor([0.0722, 0.0807, 0.1278, 0.1908, 0.1204, 0.1692, 0.1767, 0.0816], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0201, 0.0201, 0.0189, 0.0216, 0.0208, 0.0220, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:02:33,425 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31259.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:02:55,613 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1695, 1.8627, 2.6683, 1.6964, 2.4936, 2.4599, 1.7624, 2.6424], device='cuda:6'), covar=tensor([0.1511, 0.2089, 0.1529, 0.2246, 0.0829, 0.1431, 0.2780, 0.0774], device='cuda:6'), in_proj_covar=tensor([0.0208, 0.0208, 0.0201, 0.0197, 0.0185, 0.0223, 0.0220, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:03:04,532 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31281.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:03:09,261 INFO [finetune.py:976] (6/7) Epoch 6, batch 2650, loss[loss=0.2062, simple_loss=0.2699, pruned_loss=0.07127, over 4828.00 frames. ], tot_loss[loss=0.2157, simple_loss=0.2764, pruned_loss=0.07748, over 952332.06 frames. ], batch size: 30, lr: 3.91e-03, grad_scale: 16.0
2023-03-26 06:03:15,217 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31290.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:03:38,134 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31311.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:03:47,926 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.761e+02 2.106e+02 2.462e+02 3.966e+02, threshold=4.213e+02, percent-clipped=0.0
2023-03-26 06:03:55,221 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6848, 0.6219, 1.5190, 1.3341, 1.3323, 1.3217, 1.2282, 1.4581], device='cuda:6'), covar=tensor([0.5456, 0.7162, 0.6039, 0.6496, 0.7281, 0.5780, 0.8132, 0.5601], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0245, 0.0253, 0.0256, 0.0241, 0.0218, 0.0273, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:03:59,090 INFO [finetune.py:976] (6/7) Epoch 6, batch 2700, loss[loss=0.1695, simple_loss=0.2297, pruned_loss=0.05469, over 4723.00 frames. ], tot_loss[loss=0.2154, simple_loss=0.276, pruned_loss=0.07737, over 952720.32 frames. ], batch size: 23, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:04:11,203 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0
2023-03-26 06:04:23,338 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31359.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:04:33,877 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31367.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 06:05:04,058 INFO [finetune.py:976] (6/7) Epoch 6, batch 2750, loss[loss=0.2003, simple_loss=0.2592, pruned_loss=0.07066, over 4909.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2734, pruned_loss=0.07654, over 953569.90 frames. ], batch size: 43, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:05:33,001 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31409.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:05:46,888 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.636e+02 1.989e+02 2.265e+02 3.886e+02, threshold=3.977e+02, percent-clipped=0.0
2023-03-26 06:05:48,449 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-26 06:05:56,332 INFO [finetune.py:976] (6/7) Epoch 6, batch 2800, loss[loss=0.2006, simple_loss=0.2649, pruned_loss=0.06809, over 4705.00 frames. ], tot_loss[loss=0.2095, simple_loss=0.2695, pruned_loss=0.0748, over 953644.87 frames. ], batch size: 23, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:05:57,695 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6540, 1.4255, 1.3635, 1.6644, 2.0880, 1.7014, 1.1706, 1.3741], device='cuda:6'), covar=tensor([0.2449, 0.2522, 0.2262, 0.1876, 0.1757, 0.1378, 0.2954, 0.2180], device='cuda:6'), in_proj_covar=tensor([0.0234, 0.0208, 0.0202, 0.0186, 0.0236, 0.0176, 0.0213, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:06:17,011 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31470.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:06:42,529 INFO [finetune.py:976] (6/7) Epoch 6, batch 2850, loss[loss=0.2418, simple_loss=0.304, pruned_loss=0.08977, over 4812.00 frames. ], tot_loss[loss=0.2105, simple_loss=0.2697, pruned_loss=0.07568, over 952899.67 frames. ], batch size: 39, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:06:43,220 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3205, 2.8910, 2.7913, 1.2399, 2.9666, 2.2223, 0.6646, 1.9027], device='cuda:6'), covar=tensor([0.2701, 0.2419, 0.1844, 0.3640, 0.1362, 0.1161, 0.4345, 0.1809], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0171, 0.0163, 0.0128, 0.0156, 0.0122, 0.0145, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 06:06:43,856 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31491.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 06:07:26,132 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.153e+02 1.576e+02 1.974e+02 2.520e+02 5.037e+02, threshold=3.948e+02, percent-clipped=2.0
2023-03-26 06:07:40,824 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0056, 1.8884, 2.3757, 1.5495, 2.0521, 2.3371, 1.8215, 2.5238], device='cuda:6'), covar=tensor([0.1413, 0.1982, 0.1407, 0.2108, 0.1092, 0.1613, 0.2460, 0.0807], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0205, 0.0199, 0.0195, 0.0183, 0.0220, 0.0217, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:07:42,539 INFO [finetune.py:976] (6/7) Epoch 6, batch 2900, loss[loss=0.2279, simple_loss=0.2915, pruned_loss=0.08216, over 4801.00 frames. ], tot_loss[loss=0.2106, simple_loss=0.2707, pruned_loss=0.0753, over 952656.49 frames. ], batch size: 51, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:07:59,545 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31552.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 06:08:00,121 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9183, 1.4635, 0.7938, 1.7139, 2.1955, 1.5056, 1.8106, 1.7515], device='cuda:6'), covar=tensor([0.1549, 0.2101, 0.2433, 0.1288, 0.2032, 0.2095, 0.1471, 0.2036], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0099, 0.0115, 0.0093, 0.0125, 0.0097, 0.0101, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 06:08:04,307 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31559.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:06,810 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 06:08:19,185 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31581.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:23,937 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2947, 1.3980, 1.5398, 0.7921, 1.4151, 1.7041, 1.7197, 1.3778], device='cuda:6'), covar=tensor([0.1023, 0.0623, 0.0424, 0.0636, 0.0454, 0.0561, 0.0331, 0.0612], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0159, 0.0121, 0.0139, 0.0134, 0.0125, 0.0148, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.8722e-05, 1.1771e-04, 8.7926e-05, 1.0126e-04, 9.5976e-05, 9.2656e-05, 1.0969e-04, 1.0778e-04], device='cuda:6')
2023-03-26 06:08:27,344 INFO [finetune.py:976] (6/7) Epoch 6, batch 2950, loss[loss=0.2288, simple_loss=0.2899, pruned_loss=0.08386, over 4772.00 frames. ], tot_loss[loss=0.2152, simple_loss=0.2757, pruned_loss=0.07735, over 950478.51 frames. ], batch size: 54, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:08:33,443 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31590.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:46,807 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-26 06:08:47,783 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31607.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:08:59,952 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.780e+02 2.105e+02 2.565e+02 4.082e+02, threshold=4.210e+02, percent-clipped=1.0
2023-03-26 06:09:02,951 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31629.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:09:09,346 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31638.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:09:09,900 INFO [finetune.py:976] (6/7) Epoch 6, batch 3000, loss[loss=0.2512, simple_loss=0.3071, pruned_loss=0.09766, over 4837.00 frames. ], tot_loss[loss=0.2158, simple_loss=0.2764, pruned_loss=0.0776, over 950161.43 frames. ], batch size: 49, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:09:09,900 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 06:09:23,489 INFO [finetune.py:1010] (6/7) Epoch 6, validation: loss=0.1625, simple_loss=0.2344, pruned_loss=0.04534, over 2265189.00 frames.
2023-03-26 06:09:23,490 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6300MB
2023-03-26 06:09:55,826 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=31667.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:10:19,288 INFO [finetune.py:976] (6/7) Epoch 6, batch 3050, loss[loss=0.2009, simple_loss=0.2648, pruned_loss=0.06848, over 4748.00 frames. ], tot_loss[loss=0.2174, simple_loss=0.2781, pruned_loss=0.07835, over 952657.78 frames. ], batch size: 26, lr: 3.90e-03, grad_scale: 16.0
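The "Computing validation loss" block above runs between two training batches: the whole dev set (2265189.00 frames here) is scored under no_grad and the frame-weighted averages are logged as "validation: loss=...". A generic sketch of such a pass, with compute_loss standing in for the script's own loss function:

    import torch

    def validate(model, dev_loader, compute_loss):
        model.eval()
        loss_sum, frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = compute_loss(model, batch)
                loss_sum += loss.item() * num_frames
                frames += num_frames
        model.train()
        return loss_sum / frames  # reported as "validation: loss=..."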
2023-03-26 06:10:28,755 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6389, 1.6487, 1.5288, 1.8144, 1.8480, 1.7503, 1.0715, 1.3985], device='cuda:6'), covar=tensor([0.2549, 0.2253, 0.1995, 0.1779, 0.2202, 0.1300, 0.3168, 0.2138], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0209, 0.0202, 0.0186, 0.0236, 0.0176, 0.0213, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:10:29,947 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31704.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:10:33,661 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9893, 1.8574, 1.5886, 1.8658, 2.0438, 1.6933, 2.2646, 1.9756], device='cuda:6'), covar=tensor([0.1639, 0.2878, 0.3904, 0.3285, 0.2916, 0.1899, 0.3795, 0.2306], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0237, 0.0254, 0.0232, 0.0191, 0.0211, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:10:36,654 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=31715.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:10:43,060 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.708e+02 2.072e+02 2.420e+02 4.871e+02, threshold=4.144e+02, percent-clipped=1.0
2023-03-26 06:10:59,685 INFO [finetune.py:976] (6/7) Epoch 6, batch 3100, loss[loss=0.1838, simple_loss=0.2525, pruned_loss=0.05754, over 4899.00 frames. ], tot_loss[loss=0.2144, simple_loss=0.2753, pruned_loss=0.07678, over 953501.35 frames. ], batch size: 36, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:11:20,241 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31765.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:11:20,284 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31765.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:11:35,784 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31788.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:11:36,283 INFO [finetune.py:976] (6/7) Epoch 6, batch 3150, loss[loss=0.237, simple_loss=0.2855, pruned_loss=0.09422, over 4840.00 frames. ], tot_loss[loss=0.2124, simple_loss=0.2723, pruned_loss=0.07629, over 953399.13 frames. ], batch size: 30, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:12:00,899 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31823.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:12:01,992 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.748e+01 1.682e+02 2.020e+02 2.535e+02 4.605e+02, threshold=4.040e+02, percent-clipped=1.0
2023-03-26 06:12:15,953 INFO [finetune.py:976] (6/7) Epoch 6, batch 3200, loss[loss=0.245, simple_loss=0.2963, pruned_loss=0.09687, over 4916.00 frames. ], tot_loss[loss=0.2103, simple_loss=0.2693, pruned_loss=0.07563, over 953809.44 frames. ], batch size: 36, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:12:25,653 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=31847.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 06:12:32,396 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31849.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:13:06,113 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31878.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:13:11,187 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31884.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:13:14,083 INFO [finetune.py:976] (6/7) Epoch 6, batch 3250, loss[loss=0.2612, simple_loss=0.3181, pruned_loss=0.1022, over 4908.00 frames. ], tot_loss[loss=0.2121, simple_loss=0.2712, pruned_loss=0.07645, over 952773.06 frames. ], batch size: 37, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:13:51,969 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31915.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:01,797 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.169e+02 1.695e+02 2.048e+02 2.376e+02 4.231e+02, threshold=4.096e+02, percent-clipped=1.0
2023-03-26 06:14:17,633 INFO [finetune.py:976] (6/7) Epoch 6, batch 3300, loss[loss=0.2183, simple_loss=0.2838, pruned_loss=0.07643, over 4798.00 frames. ], tot_loss[loss=0.2135, simple_loss=0.2738, pruned_loss=0.07662, over 952494.58 frames. ], batch size: 45, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:14:17,767 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31939.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:29,120 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=31950.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:43,825 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0
2023-03-26 06:14:46,574 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=31976.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:14:54,839 INFO [finetune.py:976] (6/7) Epoch 6, batch 3350, loss[loss=0.1895, simple_loss=0.2616, pruned_loss=0.05869, over 4792.00 frames. ], tot_loss[loss=0.2152, simple_loss=0.2764, pruned_loss=0.07698, over 953839.82 frames. ], batch size: 51, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:15:04,920 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0
2023-03-26 06:15:10,299 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0223, 1.7993, 1.6195, 1.8357, 2.0551, 1.6851, 2.2678, 1.9515], device='cuda:6'), covar=tensor([0.1525, 0.2877, 0.3659, 0.3054, 0.2757, 0.1895, 0.3700, 0.2146], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0192, 0.0235, 0.0253, 0.0231, 0.0190, 0.0211, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:15:11,498 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=32011.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:15:15,052 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 06:15:22,687 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.186e+02 1.743e+02 2.097e+02 2.540e+02 4.089e+02, threshold=4.194e+02, percent-clipped=0.0
2023-03-26 06:15:41,803 INFO [finetune.py:976] (6/7) Epoch 6, batch 3400, loss[loss=0.2268, simple_loss=0.2865, pruned_loss=0.08353, over 4828.00 frames. ], tot_loss[loss=0.2159, simple_loss=0.277, pruned_loss=0.07741, over 951343.47 frames. ], batch size: 47, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:16:09,782 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32060.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:16:13,835 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32065.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:16:31,761 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-26 06:16:40,924 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0784, 2.0884, 2.4174, 1.0881, 2.5130, 2.7052, 2.2324, 2.0815], device='cuda:6'), covar=tensor([0.1065, 0.0935, 0.0421, 0.0767, 0.0485, 0.0527, 0.0423, 0.0596], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0157, 0.0121, 0.0138, 0.0133, 0.0125, 0.0147, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.7822e-05, 1.1639e-04, 8.7499e-05, 1.0052e-04, 9.5729e-05, 9.2226e-05, 1.0907e-04, 1.0719e-04], device='cuda:6')
2023-03-26 06:16:41,401 INFO [finetune.py:976] (6/7) Epoch 6, batch 3450, loss[loss=0.2141, simple_loss=0.2641, pruned_loss=0.08206, over 4836.00 frames. ], tot_loss[loss=0.2137, simple_loss=0.2754, pruned_loss=0.07599, over 951769.99 frames. ], batch size: 30, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:17:08,248 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32113.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:17:16,937 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.573e+02 1.926e+02 2.516e+02 4.351e+02, threshold=3.853e+02, percent-clipped=2.0
2023-03-26 06:17:17,666 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0550, 1.0141, 0.9888, 0.3863, 0.8037, 1.1792, 1.2364, 0.9914], device='cuda:6'), covar=tensor([0.0992, 0.0576, 0.0492, 0.0602, 0.0571, 0.0608, 0.0397, 0.0611], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0158, 0.0121, 0.0138, 0.0133, 0.0125, 0.0147, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.7993e-05, 1.1649e-04, 8.7725e-05, 1.0064e-04, 9.5687e-05, 9.2425e-05, 1.0912e-04, 1.0734e-04], device='cuda:6')
2023-03-26 06:17:25,472 INFO [finetune.py:976] (6/7) Epoch 6, batch 3500, loss[loss=0.2184, simple_loss=0.2602, pruned_loss=0.08832, over 4728.00 frames. ], tot_loss[loss=0.2106, simple_loss=0.2717, pruned_loss=0.07472, over 952026.71 frames. ], batch size: 23, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:17:29,080 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32144.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:17:29,135 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9399, 1.7558, 1.6403, 1.6510, 1.9531, 1.6585, 2.1388, 1.9238], device='cuda:6'), covar=tensor([0.1571, 0.2680, 0.3355, 0.2660, 0.2499, 0.1649, 0.3145, 0.1968], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0235, 0.0253, 0.0231, 0.0191, 0.0211, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:17:30,913 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32147.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 06:18:09,974 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32179.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:18:15,990 INFO [finetune.py:976] (6/7) Epoch 6, batch 3550, loss[loss=0.2176, simple_loss=0.275, pruned_loss=0.0801, over 4837.00 frames. ], tot_loss[loss=0.2097, simple_loss=0.2699, pruned_loss=0.07476, over 953215.98 frames. ], batch size: 30, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:18:19,694 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32195.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 06:18:40,463 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.617e+02 1.889e+02 2.318e+02 4.823e+02, threshold=3.777e+02, percent-clipped=2.0
2023-03-26 06:18:52,032 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32234.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:18:58,895 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5100, 1.6034, 1.6335, 0.9147, 1.6564, 1.8767, 1.9228, 1.5062], device='cuda:6'), covar=tensor([0.1148, 0.0792, 0.0466, 0.0730, 0.0449, 0.0698, 0.0323, 0.0745], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0156, 0.0120, 0.0137, 0.0132, 0.0123, 0.0146, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.7072e-05, 1.1553e-04, 8.7147e-05, 9.9561e-05, 9.4803e-05, 9.1166e-05, 1.0798e-04, 1.0642e-04], device='cuda:6')
2023-03-26 06:19:00,024 INFO [finetune.py:976] (6/7) Epoch 6, batch 3600, loss[loss=0.207, simple_loss=0.2609, pruned_loss=0.0766, over 4813.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2678, pruned_loss=0.07423, over 955593.58 frames. ], batch size: 38, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:19:36,023 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32271.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:19:57,776 INFO [finetune.py:976] (6/7) Epoch 6, batch 3650, loss[loss=0.2416, simple_loss=0.3049, pruned_loss=0.08913, over 4903.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.271, pruned_loss=0.07563, over 955399.70 frames. ], batch size: 35, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:20:14,273 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=32306.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:20:18,585 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3374, 3.7353, 3.9196, 4.2499, 4.0803, 3.8386, 4.4528, 1.4112], device='cuda:6'), covar=tensor([0.0783, 0.0828, 0.0723, 0.0895, 0.1223, 0.1480, 0.0608, 0.5417], device='cuda:6'), in_proj_covar=tensor([0.0355, 0.0243, 0.0276, 0.0294, 0.0334, 0.0284, 0.0302, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:20:26,743 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.776e+02 2.129e+02 2.587e+02 5.126e+02, threshold=4.259e+02, percent-clipped=5.0
2023-03-26 06:20:43,050 INFO [finetune.py:976] (6/7) Epoch 6, batch 3700, loss[loss=0.2352, simple_loss=0.2973, pruned_loss=0.08654, over 4803.00 frames. ], tot_loss[loss=0.2155, simple_loss=0.276, pruned_loss=0.0775, over 956536.60 frames. ], batch size: 45, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:20:48,054 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4523, 3.8940, 4.0603, 4.3451, 4.1914, 3.9549, 4.5691, 1.3772], device='cuda:6'), covar=tensor([0.0787, 0.0828, 0.0812, 0.0884, 0.1217, 0.1509, 0.0679, 0.5361], device='cuda:6'), in_proj_covar=tensor([0.0355, 0.0242, 0.0275, 0.0293, 0.0334, 0.0284, 0.0302, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:20:56,525 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32360.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:21:00,257 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5631, 1.4734, 2.2191, 3.2939, 2.2063, 2.3616, 0.6657, 2.4450], device='cuda:6'), covar=tensor([0.1694, 0.1533, 0.1181, 0.0574, 0.0796, 0.1458, 0.2139, 0.0648], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0118, 0.0136, 0.0167, 0.0102, 0.0142, 0.0130, 0.0104], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 06:21:16,590 INFO [finetune.py:976] (6/7) Epoch 6, batch 3750, loss[loss=0.2464, simple_loss=0.2915, pruned_loss=0.1006, over 4825.00 frames. ], tot_loss[loss=0.2146, simple_loss=0.2751, pruned_loss=0.07707, over 953354.95 frames. ], batch size: 47, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:21:37,056 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32408.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:21:48,325 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.626e+02 1.929e+02 2.294e+02 3.909e+02, threshold=3.857e+02, percent-clipped=0.0
2023-03-26 06:21:58,658 INFO [finetune.py:976] (6/7) Epoch 6, batch 3800, loss[loss=0.2308, simple_loss=0.2969, pruned_loss=0.0823, over 4915.00 frames. ], tot_loss[loss=0.2156, simple_loss=0.2762, pruned_loss=0.07748, over 952304.27 frames. ], batch size: 33, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:22:01,797 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32444.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:22:24,390 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32479.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:22:31,248 INFO [finetune.py:976] (6/7) Epoch 6, batch 3850, loss[loss=0.19, simple_loss=0.2666, pruned_loss=0.05673, over 4784.00 frames. ], tot_loss[loss=0.2141, simple_loss=0.2749, pruned_loss=0.07664, over 953518.07 frames. ], batch size: 29, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:22:33,638 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32492.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:22:46,307 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6143, 1.6543, 1.3797, 1.4124, 1.8719, 1.9520, 1.7102, 1.4465], device='cuda:6'), covar=tensor([0.0305, 0.0285, 0.0540, 0.0317, 0.0205, 0.0383, 0.0243, 0.0379], device='cuda:6'), in_proj_covar=tensor([0.0087, 0.0111, 0.0137, 0.0116, 0.0103, 0.0099, 0.0090, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.7839e-05, 8.7421e-05, 1.0969e-04, 9.1349e-05, 8.1288e-05, 7.3675e-05, 6.8725e-05, 8.3888e-05], device='cuda:6')
2023-03-26 06:22:54,000 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.617e+01 1.620e+02 2.056e+02 2.694e+02 5.558e+02, threshold=4.113e+02, percent-clipped=4.0
2023-03-26 06:22:55,288 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32527.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:23:00,107 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32534.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:23:04,463 INFO [finetune.py:976] (6/7) Epoch 6, batch 3900, loss[loss=0.2446, simple_loss=0.2833, pruned_loss=0.1029, over 4927.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2723, pruned_loss=0.07574, over 953964.50 frames. ], batch size: 33, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:23:12,413 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0657, 1.8364, 1.6596, 1.8103, 2.0281, 1.7511, 2.3037, 2.0278], device='cuda:6'), covar=tensor([0.1423, 0.2927, 0.3409, 0.3092, 0.2639, 0.1761, 0.3599, 0.2091], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0236, 0.0254, 0.0232, 0.0191, 0.0212, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:23:24,749 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32571.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:23:31,335 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32582.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:23:35,326 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0
2023-03-26 06:23:36,048 INFO [finetune.py:976] (6/7) Epoch 6, batch 3950, loss[loss=0.1918, simple_loss=0.2476, pruned_loss=0.06805, over 4748.00 frames. ], tot_loss[loss=0.2092, simple_loss=0.2694, pruned_loss=0.07449, over 955373.63 frames. ], batch size: 59, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:23:59,361 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=32606.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:24:07,664 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32619.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:24:11,253 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.672e+02 2.141e+02 2.509e+02 4.478e+02, threshold=4.281e+02, percent-clipped=2.0
2023-03-26 06:24:20,842 INFO [finetune.py:976] (6/7) Epoch 6, batch 4000, loss[loss=0.2254, simple_loss=0.2773, pruned_loss=0.08681, over 4133.00 frames. ], tot_loss[loss=0.2096, simple_loss=0.2689, pruned_loss=0.07517, over 952189.16 frames. ], batch size: 65, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:24:26,730 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0
2023-03-26 06:24:30,957 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=32654.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:24:31,669 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0175, 1.9391, 1.6683, 2.0093, 1.8811, 1.8134, 1.7999, 2.7433], device='cuda:6'), covar=tensor([0.6335, 0.8033, 0.5008, 0.7028, 0.6562, 0.3632, 0.7044, 0.2213], device='cuda:6'), in_proj_covar=tensor([0.0282, 0.0258, 0.0220, 0.0283, 0.0239, 0.0203, 0.0244, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:25:04,480 INFO [finetune.py:976] (6/7) Epoch 6, batch 4050, loss[loss=0.262, simple_loss=0.3198, pruned_loss=0.102, over 4813.00 frames. ], tot_loss[loss=0.2124, simple_loss=0.2723, pruned_loss=0.07629, over 952176.90 frames. ], batch size: 39, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:25:13,996 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6022, 1.4170, 2.1748, 3.3009, 2.2370, 2.2684, 0.8756, 2.5750], device='cuda:6'), covar=tensor([0.1775, 0.1618, 0.1223, 0.0620, 0.0816, 0.1657, 0.1932, 0.0570], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0119, 0.0136, 0.0167, 0.0103, 0.0142, 0.0129, 0.0104], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6')
2023-03-26 06:25:28,436 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.831e+02 2.133e+02 2.637e+02 5.226e+02, threshold=4.267e+02, percent-clipped=1.0
2023-03-26 06:25:42,718 INFO [finetune.py:976] (6/7) Epoch 6, batch 4100, loss[loss=0.1967, simple_loss=0.2689, pruned_loss=0.06221, over 4754.00 frames. ], tot_loss[loss=0.2162, simple_loss=0.2759, pruned_loss=0.07823, over 950507.15 frames. ], batch size: 28, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:26:19,785 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.56 vs. limit=2.0
2023-03-26 06:26:34,196 INFO [finetune.py:976] (6/7) Epoch 6, batch 4150, loss[loss=0.2524, simple_loss=0.3187, pruned_loss=0.09302, over 4906.00 frames. ], tot_loss[loss=0.2182, simple_loss=0.2777, pruned_loss=0.07933, over 949785.80 frames. ], batch size: 36, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:26:38,688 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.40 vs. limit=5.0
2023-03-26 06:27:15,990 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 06:27:18,093 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.203e+02 1.784e+02 2.149e+02 2.523e+02 4.029e+02, threshold=4.299e+02, percent-clipped=0.0
2023-03-26 06:27:37,292 INFO [finetune.py:976] (6/7) Epoch 6, batch 4200, loss[loss=0.2406, simple_loss=0.289, pruned_loss=0.09605, over 4894.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2767, pruned_loss=0.07818, over 949625.83 frames. ], batch size: 43, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:27:40,482 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2748, 2.8952, 3.0237, 3.2344, 3.0593, 2.8829, 3.3170, 0.9869], device='cuda:6'), covar=tensor([0.1023, 0.0898, 0.1032, 0.1040, 0.1467, 0.1660, 0.1142, 0.4826], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0239, 0.0272, 0.0291, 0.0330, 0.0281, 0.0299, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:28:34,722 INFO [finetune.py:976] (6/7) Epoch 6, batch 4250, loss[loss=0.2422, simple_loss=0.2883, pruned_loss=0.09803, over 4889.00 frames. ], tot_loss[loss=0.2151, simple_loss=0.2748, pruned_loss=0.07767, over 950450.18 frames. ], batch size: 35, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:29:25,481 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.595e+02 1.920e+02 2.303e+02 3.727e+02, threshold=3.841e+02, percent-clipped=0.0
2023-03-26 06:29:44,596 INFO [finetune.py:976] (6/7) Epoch 6, batch 4300, loss[loss=0.174, simple_loss=0.2349, pruned_loss=0.05652, over 4755.00 frames. ], tot_loss[loss=0.2112, simple_loss=0.2705, pruned_loss=0.07592, over 949625.29 frames. ], batch size: 23, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:30:17,448 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=32966.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:30:47,043 INFO [finetune.py:976] (6/7) Epoch 6, batch 4350, loss[loss=0.229, simple_loss=0.3001, pruned_loss=0.07895, over 4805.00 frames. ], tot_loss[loss=0.208, simple_loss=0.2674, pruned_loss=0.07431, over 949609.44 frames. ], batch size: 39, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:31:22,491 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 06:31:32,013 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.730e+02 2.041e+02 2.589e+02 3.941e+02, threshold=4.082e+02, percent-clipped=1.0
2023-03-26 06:31:38,475 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33027.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:31:50,139 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8648, 1.4874, 1.0049, 1.8981, 2.1158, 1.7960, 1.7203, 1.8607], device='cuda:6'), covar=tensor([0.1297, 0.1807, 0.2145, 0.1041, 0.1929, 0.1824, 0.1207, 0.1541], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 06:31:50,659 INFO [finetune.py:976] (6/7) Epoch 6, batch 4400, loss[loss=0.2478, simple_loss=0.3128, pruned_loss=0.09142, over 4177.00 frames. ], tot_loss[loss=0.2089, simple_loss=0.2683, pruned_loss=0.07477, over 950618.93 frames. ], batch size: 66, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:32:31,736 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33069.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:32:54,583 INFO [finetune.py:976] (6/7) Epoch 6, batch 4450, loss[loss=0.2124, simple_loss=0.2761, pruned_loss=0.07432, over 4855.00 frames. ], tot_loss[loss=0.2113, simple_loss=0.2717, pruned_loss=0.0754, over 950527.68 frames. ], batch size: 31, lr: 3.90e-03, grad_scale: 16.0
2023-03-26 06:33:15,639 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.0886, 3.5611, 3.7190, 3.9926, 3.8348, 3.6502, 4.1724, 1.3354], device='cuda:6'), covar=tensor([0.0785, 0.0835, 0.0762, 0.0836, 0.1262, 0.1515, 0.0697, 0.5058], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0239, 0.0271, 0.0291, 0.0330, 0.0281, 0.0300, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 06:33:39,235 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.174e+02 1.714e+02 2.125e+02 2.690e+02 4.211e+02, threshold=4.250e+02, percent-clipped=1.0
2023-03-26 06:33:48,020 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33130.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:33:58,505 INFO [finetune.py:976] (6/7) Epoch 6, batch 4500, loss[loss=0.2256, simple_loss=0.2802, pruned_loss=0.08556, over 4827.00 frames. ], tot_loss[loss=0.2124, simple_loss=0.2736, pruned_loss=0.07558, over 953096.89 frames. ], batch size: 30, lr: 3.89e-03, grad_scale: 32.0
2023-03-26 06:34:19,015 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33155.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:34:59,966 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6365, 1.6666, 1.3321, 1.5550, 1.8351, 1.8077, 1.5947, 1.4159], device='cuda:6'), covar=tensor([0.0274, 0.0276, 0.0501, 0.0282, 0.0178, 0.0413, 0.0286, 0.0364], device='cuda:6'), in_proj_covar=tensor([0.0087, 0.0111, 0.0137, 0.0116, 0.0103, 0.0099, 0.0090, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.7712e-05, 8.7207e-05, 1.0947e-04, 9.1130e-05, 8.1497e-05, 7.3750e-05, 6.8936e-05, 8.4566e-05], device='cuda:6')
2023-03-26 06:35:01,061 INFO [finetune.py:976] (6/7) Epoch 6, batch 4550, loss[loss=0.3512, simple_loss=0.3754, pruned_loss=0.1635, over 4167.00 frames. ], tot_loss[loss=0.2154, simple_loss=0.276, pruned_loss=0.07742, over 950890.90 frames. ], batch size: 66, lr: 3.89e-03, grad_scale: 32.0
2023-03-26 06:35:33,176 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33216.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 06:35:44,645 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.755e+02 2.084e+02 2.335e+02 4.621e+02, threshold=4.168e+02, percent-clipped=2.0
2023-03-26 06:36:05,054 INFO [finetune.py:976] (6/7) Epoch 6, batch 4600, loss[loss=0.1736, simple_loss=0.2347, pruned_loss=0.05626, over 4706.00 frames. ], tot_loss[loss=0.2142, simple_loss=0.2752, pruned_loss=0.07658, over 951098.57 frames. ], batch size: 23, lr: 3.89e-03, grad_scale: 32.0
], batch size: 23, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:36:23,883 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6425, 1.2471, 0.9712, 1.6429, 2.0770, 1.3642, 1.5058, 1.7017], device='cuda:6'), covar=tensor([0.1341, 0.1940, 0.1969, 0.1090, 0.1891, 0.1901, 0.1251, 0.1605], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0114, 0.0092, 0.0123, 0.0095, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 06:36:25,092 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4253, 1.3397, 1.4148, 0.7668, 1.5457, 1.5128, 1.4390, 1.2870], device='cuda:6'), covar=tensor([0.0646, 0.0757, 0.0757, 0.1018, 0.0742, 0.0759, 0.0665, 0.1202], device='cuda:6'), in_proj_covar=tensor([0.0140, 0.0134, 0.0145, 0.0128, 0.0114, 0.0146, 0.0146, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:37:07,064 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3851, 2.1129, 1.7734, 0.9228, 2.0010, 1.8873, 1.6186, 1.9274], device='cuda:6'), covar=tensor([0.0932, 0.0837, 0.1472, 0.1965, 0.1527, 0.2082, 0.2088, 0.1008], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0200, 0.0199, 0.0187, 0.0214, 0.0206, 0.0218, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:37:07,546 INFO [finetune.py:976] (6/7) Epoch 6, batch 4650, loss[loss=0.223, simple_loss=0.2816, pruned_loss=0.08226, over 4932.00 frames. ], tot_loss[loss=0.2127, simple_loss=0.2729, pruned_loss=0.0762, over 951151.36 frames. ], batch size: 46, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:37:37,419 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9658, 1.6534, 2.4265, 1.4184, 2.0010, 2.1421, 1.5464, 2.2693], device='cuda:6'), covar=tensor([0.1800, 0.2388, 0.1428, 0.2422, 0.1298, 0.1862, 0.3149, 0.1308], device='cuda:6'), in_proj_covar=tensor([0.0204, 0.0204, 0.0198, 0.0196, 0.0183, 0.0220, 0.0218, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:37:48,445 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33322.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:37:50,391 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0 2023-03-26 06:37:50,591 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.572e+02 1.961e+02 2.434e+02 3.752e+02, threshold=3.921e+02, percent-clipped=0.0 2023-03-26 06:38:10,952 INFO [finetune.py:976] (6/7) Epoch 6, batch 4700, loss[loss=0.165, simple_loss=0.2363, pruned_loss=0.04685, over 4881.00 frames. ], tot_loss[loss=0.2094, simple_loss=0.2693, pruned_loss=0.07471, over 953094.07 frames. ], batch size: 31, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:39:02,066 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0 2023-03-26 06:39:20,540 INFO [finetune.py:976] (6/7) Epoch 6, batch 4750, loss[loss=0.2019, simple_loss=0.2494, pruned_loss=0.07715, over 4793.00 frames. ], tot_loss[loss=0.2103, simple_loss=0.269, pruned_loss=0.07575, over 953369.91 frames. 
], batch size: 25, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:40:04,157 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.633e+02 1.917e+02 2.350e+02 3.562e+02, threshold=3.835e+02, percent-clipped=0.0 2023-03-26 06:40:04,242 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33425.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:40:24,232 INFO [finetune.py:976] (6/7) Epoch 6, batch 4800, loss[loss=0.2301, simple_loss=0.301, pruned_loss=0.07964, over 4924.00 frames. ], tot_loss[loss=0.2139, simple_loss=0.2731, pruned_loss=0.07733, over 954208.09 frames. ], batch size: 36, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:40:58,601 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2615, 2.1112, 2.1787, 1.0824, 2.3915, 2.6510, 2.2722, 2.0183], device='cuda:6'), covar=tensor([0.1078, 0.0753, 0.0543, 0.0772, 0.0648, 0.0662, 0.0420, 0.0756], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0159, 0.0122, 0.0139, 0.0134, 0.0125, 0.0148, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.8016e-05, 1.1787e-04, 8.8596e-05, 1.0141e-04, 9.6027e-05, 9.2700e-05, 1.0992e-04, 1.0851e-04], device='cuda:6') 2023-03-26 06:41:07,491 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33472.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:41:28,618 INFO [finetune.py:976] (6/7) Epoch 6, batch 4850, loss[loss=0.2126, simple_loss=0.2818, pruned_loss=0.07172, over 4809.00 frames. ], tot_loss[loss=0.2161, simple_loss=0.2761, pruned_loss=0.07807, over 955914.75 frames. ], batch size: 41, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:41:38,171 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6198, 1.4341, 1.6109, 1.8784, 1.4365, 3.1009, 1.3594, 1.5475], device='cuda:6'), covar=tensor([0.0932, 0.1731, 0.1256, 0.0940, 0.1711, 0.0263, 0.1476, 0.1718], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0093, 0.0084, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 06:41:42,316 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4181, 1.4686, 1.7279, 1.8276, 1.4981, 3.3812, 1.3378, 1.6120], device='cuda:6'), covar=tensor([0.1021, 0.1684, 0.1223, 0.1009, 0.1693, 0.0233, 0.1437, 0.1754], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0093, 0.0083, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 06:41:59,913 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33511.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:42:14,022 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.841e+02 2.130e+02 2.477e+02 4.983e+02, threshold=4.260e+02, percent-clipped=3.0 2023-03-26 06:42:24,110 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33533.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:42:32,811 INFO [finetune.py:976] (6/7) Epoch 6, batch 4900, loss[loss=0.2163, simple_loss=0.2735, pruned_loss=0.07952, over 4862.00 frames. ], tot_loss[loss=0.2183, simple_loss=0.2788, pruned_loss=0.07897, over 957463.41 frames. ], batch size: 31, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:43:36,446 INFO [finetune.py:976] (6/7) Epoch 6, batch 4950, loss[loss=0.2363, simple_loss=0.3017, pruned_loss=0.0854, over 4776.00 frames. 
], tot_loss[loss=0.2187, simple_loss=0.2797, pruned_loss=0.07882, over 956269.59 frames. ], batch size: 51, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:43:56,122 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.46 vs. limit=5.0 2023-03-26 06:43:57,623 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1221, 2.1215, 2.0008, 1.4484, 2.3570, 2.2926, 2.1464, 1.8771], device='cuda:6'), covar=tensor([0.0657, 0.0584, 0.0905, 0.1056, 0.0433, 0.0758, 0.0693, 0.1022], device='cuda:6'), in_proj_covar=tensor([0.0139, 0.0134, 0.0144, 0.0128, 0.0113, 0.0145, 0.0145, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:44:09,092 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9796, 1.7765, 1.5864, 1.6412, 1.6505, 1.6305, 1.6565, 2.3468], device='cuda:6'), covar=tensor([0.4411, 0.4750, 0.3815, 0.4253, 0.4443, 0.2845, 0.4638, 0.1807], device='cuda:6'), in_proj_covar=tensor([0.0283, 0.0259, 0.0221, 0.0283, 0.0240, 0.0204, 0.0245, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:44:20,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33622.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:44:22,690 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.562e+01 1.670e+02 2.087e+02 2.382e+02 5.310e+02, threshold=4.173e+02, percent-clipped=3.0 2023-03-26 06:44:41,928 INFO [finetune.py:976] (6/7) Epoch 6, batch 5000, loss[loss=0.1969, simple_loss=0.2628, pruned_loss=0.06555, over 4787.00 frames. ], tot_loss[loss=0.2157, simple_loss=0.2768, pruned_loss=0.07728, over 956702.24 frames. ], batch size: 45, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:45:10,836 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-26 06:45:14,311 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=33670.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:45:15,266 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.56 vs. limit=5.0 2023-03-26 06:45:19,039 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33677.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:45:22,671 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 06:45:26,107 INFO [finetune.py:976] (6/7) Epoch 6, batch 5050, loss[loss=0.2214, simple_loss=0.279, pruned_loss=0.08187, over 4285.00 frames. ], tot_loss[loss=0.2142, simple_loss=0.2746, pruned_loss=0.0769, over 956115.16 frames. ], batch size: 65, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:45:50,298 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.621e+02 1.877e+02 2.380e+02 3.773e+02, threshold=3.754e+02, percent-clipped=0.0 2023-03-26 06:45:50,432 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33725.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:45:57,669 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 06:45:58,839 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=33738.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 06:45:59,300 INFO [finetune.py:976] (6/7) Epoch 6, batch 5100, loss[loss=0.2386, simple_loss=0.2878, pruned_loss=0.09466, over 4825.00 frames. 
], tot_loss[loss=0.2104, simple_loss=0.2707, pruned_loss=0.07502, over 956835.59 frames. ], batch size: 40, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:46:22,568 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=33773.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:46:32,749 INFO [finetune.py:976] (6/7) Epoch 6, batch 5150, loss[loss=0.2494, simple_loss=0.3022, pruned_loss=0.09827, over 4833.00 frames. ], tot_loss[loss=0.2128, simple_loss=0.2727, pruned_loss=0.07641, over 954730.71 frames. ], batch size: 33, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:46:48,247 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=33811.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:46:57,550 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.768e+02 2.107e+02 2.614e+02 3.782e+02, threshold=4.214e+02, percent-clipped=1.0 2023-03-26 06:46:59,466 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=33828.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:47:06,626 INFO [finetune.py:976] (6/7) Epoch 6, batch 5200, loss[loss=0.2975, simple_loss=0.3462, pruned_loss=0.1244, over 4798.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2765, pruned_loss=0.07777, over 955455.49 frames. ], batch size: 41, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:47:20,011 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=33859.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:47:25,753 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0248, 1.8063, 1.6455, 1.7680, 2.0776, 1.7523, 2.2093, 1.9650], device='cuda:6'), covar=tensor([0.1702, 0.2923, 0.3815, 0.3289, 0.2909, 0.2023, 0.3342, 0.2374], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0190, 0.0235, 0.0254, 0.0232, 0.0191, 0.0212, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:47:34,692 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7801, 1.6082, 1.5922, 1.7104, 1.0298, 3.5706, 1.2970, 2.0034], device='cuda:6'), covar=tensor([0.3144, 0.2335, 0.2016, 0.2219, 0.1873, 0.0171, 0.2544, 0.1215], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0115, 0.0118, 0.0122, 0.0118, 0.0098, 0.0102, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 06:47:44,971 INFO [finetune.py:976] (6/7) Epoch 6, batch 5250, loss[loss=0.2137, simple_loss=0.2651, pruned_loss=0.08119, over 4691.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2783, pruned_loss=0.07737, over 958255.61 frames. ], batch size: 23, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:47:50,348 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 06:48:10,023 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.728e+02 2.112e+02 2.559e+02 4.196e+02, threshold=4.224e+02, percent-clipped=0.0 2023-03-26 06:48:18,712 INFO [finetune.py:976] (6/7) Epoch 6, batch 5300, loss[loss=0.2712, simple_loss=0.3189, pruned_loss=0.1117, over 4167.00 frames. ], tot_loss[loss=0.2172, simple_loss=0.279, pruned_loss=0.07766, over 957687.27 frames. 
], batch size: 66, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:48:19,381 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=33939.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:48:53,985 INFO [finetune.py:976] (6/7) Epoch 6, batch 5350, loss[loss=0.2545, simple_loss=0.2993, pruned_loss=0.1048, over 4909.00 frames. ], tot_loss[loss=0.2173, simple_loss=0.2794, pruned_loss=0.07765, over 955545.92 frames. ], batch size: 36, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:49:07,564 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34000.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:49:40,307 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.730e+02 2.013e+02 2.437e+02 5.230e+02, threshold=4.026e+02, percent-clipped=3.0 2023-03-26 06:49:50,509 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34033.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 06:49:59,342 INFO [finetune.py:976] (6/7) Epoch 6, batch 5400, loss[loss=0.2195, simple_loss=0.274, pruned_loss=0.08256, over 4818.00 frames. ], tot_loss[loss=0.2149, simple_loss=0.2759, pruned_loss=0.07689, over 955878.64 frames. ], batch size: 39, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:50:02,508 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34044.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:50:51,576 INFO [finetune.py:976] (6/7) Epoch 6, batch 5450, loss[loss=0.1741, simple_loss=0.2424, pruned_loss=0.05287, over 4793.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2721, pruned_loss=0.07544, over 954174.81 frames. ], batch size: 29, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:51:04,655 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34105.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:51:04,662 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7833, 1.5651, 1.5555, 1.7707, 2.6104, 1.8521, 1.5576, 1.4386], device='cuda:6'), covar=tensor([0.2582, 0.2617, 0.2247, 0.2134, 0.1774, 0.1499, 0.2797, 0.2304], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0210, 0.0204, 0.0187, 0.0239, 0.0177, 0.0214, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:51:17,056 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.588e+02 1.777e+02 2.163e+02 4.002e+02, threshold=3.553e+02, percent-clipped=0.0 2023-03-26 06:51:18,882 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2772, 2.0771, 2.2209, 1.2035, 2.3091, 2.5736, 2.2037, 2.0620], device='cuda:6'), covar=tensor([0.0865, 0.0668, 0.0535, 0.0643, 0.0631, 0.0596, 0.0491, 0.0615], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0120, 0.0137, 0.0131, 0.0123, 0.0146, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.6822e-05, 1.1582e-04, 8.7133e-05, 1.0012e-04, 9.3903e-05, 9.1122e-05, 1.0814e-04, 1.0697e-04], device='cuda:6') 2023-03-26 06:51:19,926 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34128.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:51:26,243 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9550, 1.7457, 1.5327, 1.5298, 1.9680, 1.6621, 2.1467, 1.9213], device='cuda:6'), covar=tensor([0.1507, 0.2879, 0.3660, 0.3192, 0.2872, 0.1885, 0.3714, 0.2075], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0235, 0.0255, 0.0233, 0.0191, 0.0213, 0.0192], 
device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 06:51:27,294 INFO [finetune.py:976] (6/7) Epoch 6, batch 5500, loss[loss=0.2055, simple_loss=0.2671, pruned_loss=0.07189, over 4875.00 frames. ], tot_loss[loss=0.2084, simple_loss=0.2686, pruned_loss=0.0741, over 955325.12 frames. ], batch size: 34, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:51:31,022 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34145.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:51:45,462 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 06:51:50,703 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34176.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:52:00,940 INFO [finetune.py:976] (6/7) Epoch 6, batch 5550, loss[loss=0.1967, simple_loss=0.2536, pruned_loss=0.06991, over 4247.00 frames. ], tot_loss[loss=0.211, simple_loss=0.2709, pruned_loss=0.07555, over 954509.21 frames. ], batch size: 18, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:52:15,693 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34206.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:52:37,390 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.703e+02 1.979e+02 2.376e+02 4.570e+02, threshold=3.959e+02, percent-clipped=2.0 2023-03-26 06:52:55,405 INFO [finetune.py:976] (6/7) Epoch 6, batch 5600, loss[loss=0.215, simple_loss=0.2851, pruned_loss=0.07247, over 4899.00 frames. ], tot_loss[loss=0.2164, simple_loss=0.277, pruned_loss=0.07787, over 956238.80 frames. ], batch size: 37, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:53:34,880 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5050, 2.4215, 2.2567, 2.5687, 2.4973, 4.9278, 2.4095, 3.3893], device='cuda:6'), covar=tensor([0.3117, 0.2196, 0.1832, 0.1985, 0.1336, 0.0098, 0.1892, 0.0834], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0122, 0.0117, 0.0098, 0.0102, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6') 2023-03-26 06:53:54,497 INFO [finetune.py:976] (6/7) Epoch 6, batch 5650, loss[loss=0.1826, simple_loss=0.2558, pruned_loss=0.05472, over 4914.00 frames. ], tot_loss[loss=0.2187, simple_loss=0.2797, pruned_loss=0.07884, over 954205.93 frames. ], batch size: 37, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:53:58,428 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34295.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:54:35,325 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.805e+01 1.675e+02 2.029e+02 2.481e+02 4.265e+02, threshold=4.057e+02, percent-clipped=3.0 2023-03-26 06:54:44,901 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34333.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:54:47,712 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. 
limit=2.0 2023-03-26 06:54:47,951 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.8756, 1.9634, 2.1567, 1.1842, 2.0733, 2.1565, 2.2206, 1.7532], device='cuda:6'), covar=tensor([0.1064, 0.0673, 0.0365, 0.0611, 0.0375, 0.0839, 0.0332, 0.0665], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0121, 0.0139, 0.0132, 0.0124, 0.0147, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.7119e-05, 1.1650e-04, 8.7714e-05, 1.0109e-04, 9.4454e-05, 9.1692e-05, 1.0874e-04, 1.0769e-04], device='cuda:6') 2023-03-26 06:54:48,433 INFO [finetune.py:976] (6/7) Epoch 6, batch 5700, loss[loss=0.1962, simple_loss=0.2409, pruned_loss=0.07572, over 4142.00 frames. ], tot_loss[loss=0.2148, simple_loss=0.2741, pruned_loss=0.07772, over 932822.22 frames. ], batch size: 17, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:55:38,390 INFO [finetune.py:976] (6/7) Epoch 7, batch 0, loss[loss=0.1782, simple_loss=0.2527, pruned_loss=0.05189, over 4898.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2527, pruned_loss=0.05189, over 4898.00 frames. ], batch size: 43, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:55:38,390 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 06:55:55,927 INFO [finetune.py:1010] (6/7) Epoch 7, validation: loss=0.165, simple_loss=0.2365, pruned_loss=0.04677, over 2265189.00 frames. 2023-03-26 06:55:55,927 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6300MB 2023-03-26 06:56:14,683 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34381.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:56:36,915 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34400.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:56:59,207 INFO [finetune.py:976] (6/7) Epoch 7, batch 50, loss[loss=0.1595, simple_loss=0.2288, pruned_loss=0.04507, over 4760.00 frames. ], tot_loss[loss=0.2097, simple_loss=0.2717, pruned_loss=0.07385, over 217105.11 frames. ], batch size: 27, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:57:04,146 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5638, 1.5057, 1.7808, 1.7867, 1.6240, 3.5905, 1.2801, 1.6047], device='cuda:6'), covar=tensor([0.1011, 0.1784, 0.1137, 0.1009, 0.1673, 0.0228, 0.1601, 0.1772], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0076, 0.0079, 0.0092, 0.0083, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 06:57:09,545 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.600e+02 2.022e+02 2.565e+02 5.766e+02, threshold=4.045e+02, percent-clipped=4.0 2023-03-26 06:57:19,396 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 06:58:05,575 INFO [finetune.py:976] (6/7) Epoch 7, batch 100, loss[loss=0.1774, simple_loss=0.2353, pruned_loss=0.05981, over 4820.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2643, pruned_loss=0.0713, over 380260.82 frames. 
], batch size: 39, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:58:29,312 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5617, 1.4728, 1.8556, 1.9642, 1.5441, 3.4034, 1.3488, 1.5844], device='cuda:6'), covar=tensor([0.1014, 0.1794, 0.1125, 0.0945, 0.1583, 0.0275, 0.1504, 0.1724], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0080, 0.0076, 0.0079, 0.0092, 0.0083, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 06:58:45,258 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34501.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 06:59:06,469 INFO [finetune.py:976] (6/7) Epoch 7, batch 150, loss[loss=0.2302, simple_loss=0.2834, pruned_loss=0.08853, over 4864.00 frames. ], tot_loss[loss=0.2018, simple_loss=0.2614, pruned_loss=0.07106, over 508377.77 frames. ], batch size: 31, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 06:59:17,659 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.007e+02 1.586e+02 1.902e+02 2.328e+02 6.438e+02, threshold=3.804e+02, percent-clipped=3.0 2023-03-26 06:59:26,647 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2943, 2.0935, 1.8111, 2.2834, 2.2736, 1.9653, 2.7041, 2.2502], device='cuda:6'), covar=tensor([0.1632, 0.2787, 0.3604, 0.2955, 0.2627, 0.1988, 0.3367, 0.2093], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0237, 0.0257, 0.0234, 0.0193, 0.0214, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:00:08,602 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8347, 1.3101, 1.6867, 1.6645, 1.4514, 1.4791, 1.5884, 1.5898], device='cuda:6'), covar=tensor([0.4933, 0.6213, 0.5136, 0.5515, 0.6404, 0.5207, 0.7199, 0.4874], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0243, 0.0252, 0.0254, 0.0241, 0.0218, 0.0271, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:00:10,779 INFO [finetune.py:976] (6/7) Epoch 7, batch 200, loss[loss=0.245, simple_loss=0.2975, pruned_loss=0.0963, over 4820.00 frames. ], tot_loss[loss=0.2048, simple_loss=0.2635, pruned_loss=0.07312, over 609290.10 frames. ], batch size: 51, lr: 3.89e-03, grad_scale: 32.0 2023-03-26 07:00:42,238 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34593.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:00:43,461 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34595.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:01:00,786 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5887, 1.4979, 1.3974, 1.6768, 1.8821, 1.6455, 1.1291, 1.3736], device='cuda:6'), covar=tensor([0.1981, 0.2094, 0.1759, 0.1510, 0.1782, 0.1151, 0.2804, 0.1696], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0209, 0.0203, 0.0187, 0.0238, 0.0176, 0.0213, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:01:13,423 INFO [finetune.py:976] (6/7) Epoch 7, batch 250, loss[loss=0.2308, simple_loss=0.2738, pruned_loss=0.09392, over 4886.00 frames. ], tot_loss[loss=0.2112, simple_loss=0.2704, pruned_loss=0.07598, over 685203.68 frames. 
], batch size: 32, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:01:22,747 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34622.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:01:24,454 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.737e+02 2.023e+02 2.550e+02 3.958e+02, threshold=4.047e+02, percent-clipped=1.0 2023-03-26 07:01:25,218 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34626.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:01:45,029 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34643.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:02:01,689 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34654.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:02:14,153 INFO [finetune.py:976] (6/7) Epoch 7, batch 300, loss[loss=0.1864, simple_loss=0.2569, pruned_loss=0.05796, over 4909.00 frames. ], tot_loss[loss=0.214, simple_loss=0.2746, pruned_loss=0.07672, over 742808.66 frames. ], batch size: 43, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:02:34,493 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34683.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 07:02:41,972 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34687.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:02:55,202 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34700.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:02:55,848 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0476, 1.8857, 1.5937, 1.9026, 1.9554, 1.7307, 2.4030, 2.0752], device='cuda:6'), covar=tensor([0.1595, 0.3134, 0.3638, 0.3274, 0.2937, 0.1929, 0.3770, 0.2071], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0193, 0.0238, 0.0257, 0.0235, 0.0194, 0.0214, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:02:56,431 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4155, 1.4555, 1.5791, 1.8134, 1.5686, 3.1892, 1.3702, 1.5864], device='cuda:6'), covar=tensor([0.0971, 0.1684, 0.1132, 0.0937, 0.1553, 0.0261, 0.1371, 0.1626], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0076, 0.0079, 0.0092, 0.0083, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 07:02:57,017 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1873, 1.8204, 1.8444, 0.8630, 1.8781, 2.2830, 1.9562, 1.8278], device='cuda:6'), covar=tensor([0.0869, 0.0701, 0.0661, 0.0721, 0.0821, 0.0435, 0.0555, 0.0673], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0159, 0.0121, 0.0139, 0.0132, 0.0125, 0.0146, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.7375e-05, 1.1726e-04, 8.7471e-05, 1.0107e-04, 9.4702e-05, 9.2156e-05, 1.0831e-04, 1.0786e-04], device='cuda:6') 2023-03-26 07:03:06,653 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=34710.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:03:15,677 INFO [finetune.py:976] (6/7) Epoch 7, batch 350, loss[loss=0.2109, simple_loss=0.2676, pruned_loss=0.0771, over 4295.00 frames. ], tot_loss[loss=0.2162, simple_loss=0.2767, pruned_loss=0.07786, over 787618.75 frames. 
], batch size: 65, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:03:27,163 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.664e+02 2.043e+02 2.496e+02 5.690e+02, threshold=4.087e+02, percent-clipped=3.0 2023-03-26 07:03:35,281 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-26 07:03:47,806 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-26 07:03:54,753 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:04:16,439 INFO [finetune.py:976] (6/7) Epoch 7, batch 400, loss[loss=0.2191, simple_loss=0.2647, pruned_loss=0.08673, over 4772.00 frames. ], tot_loss[loss=0.216, simple_loss=0.2768, pruned_loss=0.07759, over 824283.70 frames. ], batch size: 28, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:04:16,559 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3845, 2.5559, 2.1105, 1.7226, 2.6041, 2.7221, 2.5621, 2.2340], device='cuda:6'), covar=tensor([0.0640, 0.0528, 0.0851, 0.0924, 0.0581, 0.0632, 0.0602, 0.0870], device='cuda:6'), in_proj_covar=tensor([0.0139, 0.0134, 0.0144, 0.0128, 0.0113, 0.0145, 0.0146, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:04:24,048 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=34771.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 07:04:55,085 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=34801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:05:14,338 INFO [finetune.py:976] (6/7) Epoch 7, batch 450, loss[loss=0.1728, simple_loss=0.2318, pruned_loss=0.05689, over 4219.00 frames. ], tot_loss[loss=0.2148, simple_loss=0.2752, pruned_loss=0.07717, over 852507.87 frames. ], batch size: 65, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:05:14,478 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5349, 1.3600, 1.2471, 1.4166, 1.6471, 1.6429, 1.4797, 1.2301], device='cuda:6'), covar=tensor([0.0316, 0.0331, 0.0595, 0.0314, 0.0273, 0.0472, 0.0360, 0.0440], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0112, 0.0140, 0.0117, 0.0105, 0.0101, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([6.9221e-05, 8.8248e-05, 1.1167e-04, 9.2507e-05, 8.3212e-05, 7.4797e-05, 6.9568e-05, 8.5708e-05], device='cuda:6') 2023-03-26 07:05:25,983 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.133e+01 1.634e+02 1.827e+02 2.211e+02 3.915e+02, threshold=3.654e+02, percent-clipped=0.0 2023-03-26 07:05:50,919 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=34849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:06:11,591 INFO [finetune.py:976] (6/7) Epoch 7, batch 500, loss[loss=0.2146, simple_loss=0.272, pruned_loss=0.07858, over 4908.00 frames. ], tot_loss[loss=0.2117, simple_loss=0.2716, pruned_loss=0.0759, over 873787.30 frames. ], batch size: 36, lr: 3.88e-03, grad_scale: 32.0 2023-03-26 07:06:40,322 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 07:07:15,208 INFO [finetune.py:976] (6/7) Epoch 7, batch 550, loss[loss=0.1823, simple_loss=0.2556, pruned_loss=0.05455, over 4771.00 frames. ], tot_loss[loss=0.2076, simple_loss=0.2677, pruned_loss=0.07376, over 892695.44 frames. 
], batch size: 26, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:07:26,485 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.074e+02 1.634e+02 2.013e+02 2.383e+02 4.182e+02, threshold=4.026e+02, percent-clipped=3.0 2023-03-26 07:07:53,599 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34949.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:08:14,354 INFO [finetune.py:976] (6/7) Epoch 7, batch 600, loss[loss=0.1969, simple_loss=0.2486, pruned_loss=0.07255, over 4696.00 frames. ], tot_loss[loss=0.2097, simple_loss=0.2696, pruned_loss=0.07488, over 908302.53 frames. ], batch size: 23, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:08:19,800 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8530, 1.8584, 1.9167, 1.1862, 1.9616, 1.9024, 1.9191, 1.6985], device='cuda:6'), covar=tensor([0.0610, 0.0628, 0.0691, 0.0927, 0.0547, 0.0827, 0.0622, 0.1066], device='cuda:6'), in_proj_covar=tensor([0.0138, 0.0133, 0.0143, 0.0127, 0.0113, 0.0144, 0.0145, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:08:31,419 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34978.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:08:35,298 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=34982.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:08:44,960 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0178, 1.9211, 1.8230, 2.1267, 2.5456, 2.1613, 1.5772, 1.6740], device='cuda:6'), covar=tensor([0.2259, 0.2063, 0.1904, 0.1679, 0.1642, 0.1020, 0.2601, 0.1851], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0209, 0.0204, 0.0186, 0.0237, 0.0176, 0.0212, 0.0190], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:09:06,445 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 07:09:16,193 INFO [finetune.py:976] (6/7) Epoch 7, batch 650, loss[loss=0.2202, simple_loss=0.286, pruned_loss=0.07715, over 4927.00 frames. ], tot_loss[loss=0.2124, simple_loss=0.2731, pruned_loss=0.07586, over 920937.94 frames. ], batch size: 38, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:09:27,414 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.303e+02 1.723e+02 2.031e+02 2.472e+02 3.902e+02, threshold=4.061e+02, percent-clipped=0.0 2023-03-26 07:10:17,354 INFO [finetune.py:976] (6/7) Epoch 7, batch 700, loss[loss=0.2317, simple_loss=0.2925, pruned_loss=0.08546, over 4885.00 frames. ], tot_loss[loss=0.2146, simple_loss=0.2757, pruned_loss=0.0768, over 929635.50 frames. 
], batch size: 32, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:10:17,425 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35066.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:10:46,481 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35092.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:10:55,787 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5029, 1.5238, 1.8981, 1.2950, 1.6456, 1.7465, 1.4319, 2.0075], device='cuda:6'), covar=tensor([0.1482, 0.2333, 0.1435, 0.2074, 0.1024, 0.1397, 0.3055, 0.0868], device='cuda:6'), in_proj_covar=tensor([0.0205, 0.0206, 0.0199, 0.0197, 0.0182, 0.0221, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:11:11,736 INFO [finetune.py:976] (6/7) Epoch 7, batch 750, loss[loss=0.2327, simple_loss=0.2919, pruned_loss=0.08674, over 4823.00 frames. ], tot_loss[loss=0.2162, simple_loss=0.2773, pruned_loss=0.07757, over 937107.68 frames. ], batch size: 47, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:11:23,579 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.641e+02 2.027e+02 2.429e+02 3.682e+02, threshold=4.054e+02, percent-clipped=0.0 2023-03-26 07:12:01,755 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35153.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:12:14,611 INFO [finetune.py:976] (6/7) Epoch 7, batch 800, loss[loss=0.2592, simple_loss=0.3048, pruned_loss=0.1068, over 4811.00 frames. ], tot_loss[loss=0.2165, simple_loss=0.2773, pruned_loss=0.0778, over 941887.28 frames. ], batch size: 25, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:13:17,536 INFO [finetune.py:976] (6/7) Epoch 7, batch 850, loss[loss=0.1992, simple_loss=0.2619, pruned_loss=0.06827, over 4741.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2749, pruned_loss=0.07706, over 944248.95 frames. ], batch size: 54, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:13:27,729 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.645e+02 1.948e+02 2.227e+02 3.525e+02, threshold=3.897e+02, percent-clipped=0.0 2023-03-26 07:13:33,528 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5430, 2.1487, 1.8642, 1.0166, 2.1389, 1.9637, 1.5800, 1.9579], device='cuda:6'), covar=tensor([0.0786, 0.0940, 0.1642, 0.2048, 0.1378, 0.1807, 0.2159, 0.1091], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0202, 0.0200, 0.0189, 0.0217, 0.0207, 0.0219, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:13:55,194 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35249.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:14:14,656 INFO [finetune.py:976] (6/7) Epoch 7, batch 900, loss[loss=0.2099, simple_loss=0.2619, pruned_loss=0.07899, over 4162.00 frames. ], tot_loss[loss=0.211, simple_loss=0.2712, pruned_loss=0.07537, over 947190.23 frames. ], batch size: 65, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:14:32,803 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35278.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:14:33,773 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. 
limit=2.0 2023-03-26 07:14:35,319 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35282.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:14:54,693 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8927, 1.5820, 2.1605, 3.6827, 2.5256, 2.5867, 0.8543, 2.8587], device='cuda:6'), covar=tensor([0.1819, 0.1675, 0.1577, 0.0573, 0.0828, 0.1823, 0.1992, 0.0649], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0118, 0.0136, 0.0166, 0.0102, 0.0141, 0.0129, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 07:14:55,292 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35297.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:15:16,599 INFO [finetune.py:976] (6/7) Epoch 7, batch 950, loss[loss=0.2063, simple_loss=0.2673, pruned_loss=0.07264, over 4768.00 frames. ], tot_loss[loss=0.2102, simple_loss=0.2703, pruned_loss=0.07503, over 948662.96 frames. ], batch size: 27, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:15:31,367 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.193e+02 1.515e+02 1.811e+02 2.306e+02 3.628e+02, threshold=3.621e+02, percent-clipped=0.0 2023-03-26 07:15:31,449 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35326.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:15:33,894 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35330.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:15:50,905 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35345.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:15:59,753 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4449, 3.8754, 4.0387, 4.2520, 4.1714, 3.9637, 4.5537, 1.3999], device='cuda:6'), covar=tensor([0.0766, 0.0763, 0.0846, 0.0894, 0.1153, 0.1489, 0.0610, 0.5449], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0241, 0.0273, 0.0291, 0.0332, 0.0282, 0.0302, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:16:18,532 INFO [finetune.py:976] (6/7) Epoch 7, batch 1000, loss[loss=0.2141, simple_loss=0.2847, pruned_loss=0.07176, over 4912.00 frames. ], tot_loss[loss=0.212, simple_loss=0.2723, pruned_loss=0.0759, over 950192.52 frames. 
], batch size: 37, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:16:18,646 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35366.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:16:27,468 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:17:08,123 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35406.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 07:17:08,130 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4792, 1.3376, 1.2066, 1.5074, 1.6029, 1.4514, 0.9086, 1.2306], device='cuda:6'), covar=tensor([0.2322, 0.2278, 0.2096, 0.1742, 0.1742, 0.1240, 0.2727, 0.1970], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0209, 0.0204, 0.0186, 0.0238, 0.0177, 0.0213, 0.0191], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:17:17,934 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35414.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:17:19,086 INFO [finetune.py:976] (6/7) Epoch 7, batch 1050, loss[loss=0.2279, simple_loss=0.2763, pruned_loss=0.08972, over 4711.00 frames. ], tot_loss[loss=0.2129, simple_loss=0.2735, pruned_loss=0.07611, over 951414.34 frames. ], batch size: 23, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:17:30,142 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.274e+02 1.696e+02 1.924e+02 2.371e+02 5.787e+02, threshold=3.848e+02, percent-clipped=4.0 2023-03-26 07:17:32,514 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4962, 1.3677, 1.5546, 1.7193, 1.5489, 3.2845, 1.3031, 1.4906], device='cuda:6'), covar=tensor([0.1033, 0.1904, 0.1265, 0.1024, 0.1699, 0.0261, 0.1545, 0.1780], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0092, 0.0083, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 07:17:38,372 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1018, 1.9220, 1.6855, 2.1192, 2.5513, 2.0847, 1.8605, 1.5293], device='cuda:6'), covar=tensor([0.2235, 0.2177, 0.2006, 0.1699, 0.1949, 0.1147, 0.2500, 0.1917], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0209, 0.0205, 0.0187, 0.0238, 0.0177, 0.0213, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:17:41,413 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35435.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:17:59,820 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35448.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:18:20,382 INFO [finetune.py:976] (6/7) Epoch 7, batch 1100, loss[loss=0.2599, simple_loss=0.3183, pruned_loss=0.1008, over 4925.00 frames. ], tot_loss[loss=0.2145, simple_loss=0.2754, pruned_loss=0.07679, over 953993.23 frames. 
], batch size: 42, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:18:52,325 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5062, 1.3875, 1.9298, 1.2354, 1.6161, 1.7099, 1.3559, 1.9473], device='cuda:6'), covar=tensor([0.1291, 0.2253, 0.1317, 0.1894, 0.0917, 0.1361, 0.2865, 0.0840], device='cuda:6'), in_proj_covar=tensor([0.0206, 0.0207, 0.0201, 0.0199, 0.0184, 0.0223, 0.0221, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:19:22,493 INFO [finetune.py:976] (6/7) Epoch 7, batch 1150, loss[loss=0.1918, simple_loss=0.2439, pruned_loss=0.06989, over 4167.00 frames. ], tot_loss[loss=0.2142, simple_loss=0.2753, pruned_loss=0.07653, over 950405.94 frames. ], batch size: 18, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:19:33,763 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.764e+02 2.068e+02 2.432e+02 4.937e+02, threshold=4.137e+02, percent-clipped=2.0 2023-03-26 07:19:41,277 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35530.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:20:03,480 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 07:20:24,974 INFO [finetune.py:976] (6/7) Epoch 7, batch 1200, loss[loss=0.1717, simple_loss=0.2449, pruned_loss=0.04927, over 4794.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2733, pruned_loss=0.07528, over 952506.76 frames. ], batch size: 51, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:20:51,615 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35591.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:21:20,133 INFO [finetune.py:976] (6/7) Epoch 7, batch 1250, loss[loss=0.1863, simple_loss=0.2492, pruned_loss=0.06172, over 4833.00 frames. ], tot_loss[loss=0.2106, simple_loss=0.2712, pruned_loss=0.07499, over 954143.51 frames. ], batch size: 33, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:21:27,305 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9625, 1.4864, 1.8877, 1.7556, 1.5811, 1.6175, 1.7299, 1.7489], device='cuda:6'), covar=tensor([0.4084, 0.5008, 0.4207, 0.4953, 0.5853, 0.4601, 0.5972, 0.4042], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0243, 0.0254, 0.0255, 0.0243, 0.0219, 0.0271, 0.0225], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:21:30,987 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1249, 1.5180, 1.9305, 1.9200, 1.7128, 1.7153, 1.8268, 1.7970], device='cuda:6'), covar=tensor([0.4492, 0.6382, 0.5136, 0.5581, 0.6819, 0.5167, 0.7441, 0.4932], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0243, 0.0254, 0.0255, 0.0243, 0.0219, 0.0272, 0.0226], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 07:21:31,427 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.265e+02 1.688e+02 2.018e+02 2.672e+02 1.298e+03, threshold=4.035e+02, percent-clipped=4.0 2023-03-26 07:22:18,865 INFO [finetune.py:976] (6/7) Epoch 7, batch 1300, loss[loss=0.1487, simple_loss=0.2153, pruned_loss=0.04109, over 4755.00 frames. ], tot_loss[loss=0.2075, simple_loss=0.2681, pruned_loss=0.07345, over 954691.36 frames. 
], batch size: 28, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:23:05,040 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6398, 1.5662, 2.0822, 1.9621, 1.7375, 4.1363, 1.4265, 1.6613], device='cuda:6'), covar=tensor([0.1010, 0.1831, 0.1182, 0.1051, 0.1650, 0.0200, 0.1479, 0.1803], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0077, 0.0079, 0.0092, 0.0083, 0.0084, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 07:23:06,118 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35701.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 07:23:25,711 INFO [finetune.py:976] (6/7) Epoch 7, batch 1350, loss[loss=0.1953, simple_loss=0.258, pruned_loss=0.06631, over 4829.00 frames. ], tot_loss[loss=0.2075, simple_loss=0.2684, pruned_loss=0.0733, over 955986.02 frames. ], batch size: 40, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:23:38,294 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.109e+01 1.659e+02 1.871e+02 2.249e+02 4.421e+02, threshold=3.743e+02, percent-clipped=1.0 2023-03-26 07:23:45,982 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35730.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:24:06,302 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=35748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:24:25,446 INFO [finetune.py:976] (6/7) Epoch 7, batch 1400, loss[loss=0.2076, simple_loss=0.2825, pruned_loss=0.06641, over 4760.00 frames. ], tot_loss[loss=0.2107, simple_loss=0.2724, pruned_loss=0.07451, over 956538.38 frames. ], batch size: 27, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:24:33,499 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35771.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:25:01,856 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=35796.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:25:20,925 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=35811.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:25:23,857 INFO [finetune.py:976] (6/7) Epoch 7, batch 1450, loss[loss=0.1944, simple_loss=0.2647, pruned_loss=0.06212, over 4798.00 frames. ], tot_loss[loss=0.212, simple_loss=0.2746, pruned_loss=0.07469, over 955949.26 frames. ], batch size: 51, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:25:33,651 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.737e+02 2.011e+02 2.560e+02 4.083e+02, threshold=4.021e+02, percent-clipped=3.0 2023-03-26 07:25:41,651 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 07:25:43,666 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35832.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:26:24,785 INFO [finetune.py:976] (6/7) Epoch 7, batch 1500, loss[loss=0.2637, simple_loss=0.3196, pruned_loss=0.1039, over 4884.00 frames. ], tot_loss[loss=0.2133, simple_loss=0.2756, pruned_loss=0.07551, over 957724.22 frames. 
], batch size: 35, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:26:32,645 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=35872.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:26:47,885 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=35886.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:27:11,196 INFO [finetune.py:976] (6/7) Epoch 7, batch 1550, loss[loss=0.2332, simple_loss=0.2821, pruned_loss=0.09212, over 4865.00 frames. ], tot_loss[loss=0.2121, simple_loss=0.2743, pruned_loss=0.07498, over 955069.40 frames. ], batch size: 31, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:27:17,686 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.940e+01 1.555e+02 1.902e+02 2.352e+02 4.828e+02, threshold=3.804e+02, percent-clipped=1.0 2023-03-26 07:27:22,794 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-26 07:27:38,574 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8482, 1.1887, 0.9346, 1.7462, 2.1684, 1.5244, 1.6220, 1.7061], device='cuda:6'), covar=tensor([0.1579, 0.2291, 0.2156, 0.1237, 0.2019, 0.1991, 0.1432, 0.2075], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0098, 0.0115, 0.0093, 0.0125, 0.0096, 0.0100, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 07:27:45,068 INFO [finetune.py:976] (6/7) Epoch 7, batch 1600, loss[loss=0.2162, simple_loss=0.2747, pruned_loss=0.07884, over 4933.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.2724, pruned_loss=0.07484, over 955279.27 frames. ], batch size: 38, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:28:25,873 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36001.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 07:28:34,617 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.90 vs. limit=5.0 2023-03-26 07:28:34,932 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4216, 1.3179, 1.9491, 2.8671, 1.8616, 2.2317, 1.0328, 2.4012], device='cuda:6'), covar=tensor([0.1911, 0.1653, 0.1216, 0.0656, 0.0971, 0.1311, 0.1855, 0.0603], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0119, 0.0137, 0.0167, 0.0103, 0.0142, 0.0130, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 07:28:35,456 INFO [finetune.py:976] (6/7) Epoch 7, batch 1650, loss[loss=0.2175, simple_loss=0.2714, pruned_loss=0.08182, over 4833.00 frames. ], tot_loss[loss=0.2096, simple_loss=0.2701, pruned_loss=0.07458, over 954504.00 frames. ], batch size: 40, lr: 3.88e-03, grad_scale: 16.0 2023-03-26 07:28:41,914 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.242e+02 1.569e+02 1.867e+02 2.342e+02 3.778e+02, threshold=3.734e+02, percent-clipped=0.0 2023-03-26 07:28:50,706 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 07:29:09,239 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36049.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 07:29:25,209 INFO [finetune.py:976] (6/7) Epoch 7, batch 1700, loss[loss=0.238, simple_loss=0.298, pruned_loss=0.08904, over 4820.00 frames. ], tot_loss[loss=0.2068, simple_loss=0.2671, pruned_loss=0.0732, over 956071.59 frames. 
2023-03-26 07:29:39,786 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36078.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:30:15,625 INFO [finetune.py:976] (6/7) Epoch 7, batch 1750, loss[loss=0.2208, simple_loss=0.291, pruned_loss=0.07529, over 4928.00 frames. ], tot_loss[loss=0.21, simple_loss=0.2704, pruned_loss=0.07478, over 955545.90 frames. ], batch size: 38, lr: 3.88e-03, grad_scale: 16.0
2023-03-26 07:30:27,794 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.647e+02 1.960e+02 2.452e+02 4.962e+02, threshold=3.920e+02, percent-clipped=3.0
2023-03-26 07:30:28,498 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36127.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:30:37,965 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4206, 1.3554, 1.6642, 1.8258, 1.4408, 3.0629, 1.2391, 1.4322], device='cuda:6'), covar=tensor([0.1066, 0.1880, 0.1207, 0.0991, 0.1778, 0.0281, 0.1603, 0.1871], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0081, 0.0076, 0.0079, 0.0092, 0.0083, 0.0084, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 07:31:18,547 INFO [finetune.py:976] (6/7) Epoch 7, batch 1800, loss[loss=0.1863, simple_loss=0.2523, pruned_loss=0.06015, over 4762.00 frames. ], tot_loss[loss=0.2129, simple_loss=0.2746, pruned_loss=0.07563, over 955942.01 frames. ], batch size: 28, lr: 3.88e-03, grad_scale: 16.0
2023-03-26 07:31:19,218 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36167.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:31:38,157 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36181.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:31:47,185 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36186.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:31:50,722 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3597, 1.4449, 1.4589, 0.8022, 1.5746, 1.5426, 1.4419, 1.3057], device='cuda:6'), covar=tensor([0.0721, 0.0834, 0.0817, 0.1127, 0.0805, 0.0893, 0.0779, 0.1439], device='cuda:6'), in_proj_covar=tensor([0.0139, 0.0134, 0.0144, 0.0127, 0.0114, 0.0145, 0.0146, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 07:31:58,885 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8978, 1.6769, 1.4781, 1.6689, 1.8979, 1.5663, 2.0913, 1.8184], device='cuda:6'), covar=tensor([0.1655, 0.2978, 0.3798, 0.3140, 0.3016, 0.1994, 0.3449, 0.2212], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0235, 0.0254, 0.0233, 0.0192, 0.0211, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 07:32:08,671 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1679, 1.2863, 1.3932, 0.5459, 1.1744, 1.5691, 1.5821, 1.2679], device='cuda:6'), covar=tensor([0.1050, 0.0652, 0.0518, 0.0663, 0.0631, 0.0593, 0.0386, 0.0913], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0121, 0.0137, 0.0132, 0.0125, 0.0146, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.7176e-05, 1.1605e-04, 8.7343e-05, 1.0001e-04, 9.5110e-05, 9.1887e-05, 1.0778e-04, 1.0751e-04], device='cuda:6')
2023-03-26 07:32:21,148 INFO [finetune.py:976] (6/7) Epoch 7, batch 1850, loss[loss=0.1574, simple_loss=0.2221, pruned_loss=0.04629, over 4706.00 frames. ], tot_loss[loss=0.2134, simple_loss=0.2753, pruned_loss=0.07582, over 954307.84 frames. ], batch size: 23, lr: 3.88e-03, grad_scale: 16.0
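The [zipformer.py:2441] dumps report, per attention head, the average entropy of the softmaxed attention weights (alongside covariance-style statistics of the projections). Low entropy means a sharply focused head; a value like 4.1363 next to ~1.5 for the other heads suggests one head attending almost uniformly. A small sketch of the entropy part, assuming weights of shape (num_heads, tgt_len, src_len):

import torch

def attn_weights_entropy(attn_weights, eps=1e-20):
    # attn_weights: (num_heads, tgt_len, src_len), rows sum to 1.
    p = attn_weights.clamp(min=eps)
    ent = -(p * p.log()).sum(dim=-1)  # entropy at each target position
    return ent.mean(dim=-1)           # averaged over positions, per head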
2023-03-26 07:32:33,456 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.736e+02 2.131e+02 2.651e+02 6.216e+02, threshold=4.263e+02, percent-clipped=3.0
2023-03-26 07:32:40,451 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36234.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:32:50,350 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36242.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:33:21,421 INFO [finetune.py:976] (6/7) Epoch 7, batch 1900, loss[loss=0.2041, simple_loss=0.2775, pruned_loss=0.06539, over 4921.00 frames. ], tot_loss[loss=0.2113, simple_loss=0.2742, pruned_loss=0.07422, over 955004.53 frames. ], batch size: 41, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:34:25,473 INFO [finetune.py:976] (6/7) Epoch 7, batch 1950, loss[loss=0.1896, simple_loss=0.2492, pruned_loss=0.06498, over 4892.00 frames. ], tot_loss[loss=0.21, simple_loss=0.2726, pruned_loss=0.07364, over 953241.15 frames. ], batch size: 32, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:34:36,923 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.969e+01 1.685e+02 2.051e+02 2.475e+02 4.640e+02, threshold=4.103e+02, percent-clipped=3.0
2023-03-26 07:35:06,115 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0
2023-03-26 07:35:28,761 INFO [finetune.py:976] (6/7) Epoch 7, batch 2000, loss[loss=0.2257, simple_loss=0.2753, pruned_loss=0.08802, over 4868.00 frames. ], tot_loss[loss=0.2088, simple_loss=0.2706, pruned_loss=0.0735, over 951480.97 frames. ], batch size: 31, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:35:56,715 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-26 07:36:30,435 INFO [finetune.py:976] (6/7) Epoch 7, batch 2050, loss[loss=0.2157, simple_loss=0.2665, pruned_loss=0.08241, over 4906.00 frames. ], tot_loss[loss=0.2067, simple_loss=0.268, pruned_loss=0.07268, over 951674.16 frames. ], batch size: 36, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:36:43,672 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.910e+01 1.532e+02 1.893e+02 2.218e+02 7.941e+02, threshold=3.786e+02, percent-clipped=2.0
2023-03-26 07:36:44,408 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36427.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:37:34,746 INFO [finetune.py:976] (6/7) Epoch 7, batch 2100, loss[loss=0.1962, simple_loss=0.2483, pruned_loss=0.07204, over 4780.00 frames. ], tot_loss[loss=0.2054, simple_loss=0.2666, pruned_loss=0.07214, over 952351.02 frames. ], batch size: 26, lr: 3.87e-03, grad_scale: 16.0
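In each "Epoch 7, batch N" line, loss[...] is the current batch (with its frame count) and tot_loss[...] is a frame-weighted running average, which is why its frame total hovers near 950k instead of growing without bound. A sketch with an assumed exponential forgetting factor; icefall's actual tracker may differ in detail:

class RunningLoss:
    def __init__(self, alpha=0.999):
        self.alpha = alpha   # forgetting factor (assumption)
        self.frames = 0.0
        self.sums = {}       # loss name -> frame-weighted sum

    def update(self, losses, num_frames):
        # losses: e.g. {"loss": 0.1953, "simple_loss": 0.258, "pruned_loss": 0.06631}
        self.frames = self.alpha * self.frames + num_frames
        for name, value in losses.items():
            self.sums[name] = self.alpha * self.sums.get(name, 0.0) + value * num_frames

    def averages(self):
        return {name: s / self.frames for name, s in self.sums.items()}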
2023-03-26 07:37:35,469 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36467.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:37:45,797 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36475.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:38:16,089 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9763, 1.8779, 1.7098, 2.0618, 2.5878, 2.0408, 1.8013, 1.4979], device='cuda:6'), covar=tensor([0.2294, 0.2295, 0.1999, 0.1758, 0.1978, 0.1199, 0.2473, 0.1942], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0209, 0.0204, 0.0186, 0.0239, 0.0177, 0.0213, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 07:38:36,451 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36515.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:38:36,989 INFO [finetune.py:976] (6/7) Epoch 7, batch 2150, loss[loss=0.2455, simple_loss=0.3135, pruned_loss=0.08873, over 4917.00 frames. ], tot_loss[loss=0.2082, simple_loss=0.2698, pruned_loss=0.07331, over 953374.45 frames. ], batch size: 38, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:38:48,001 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.787e+02 2.211e+02 2.590e+02 5.595e+02, threshold=4.423e+02, percent-clipped=4.0
2023-03-26 07:39:05,368 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36537.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:39:35,928 INFO [finetune.py:976] (6/7) Epoch 7, batch 2200, loss[loss=0.2041, simple_loss=0.2642, pruned_loss=0.07203, over 4829.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2717, pruned_loss=0.07394, over 952299.15 frames. ], batch size: 25, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:40:06,232 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9356, 1.9589, 1.9181, 1.2828, 2.0950, 2.2008, 2.1025, 1.6573], device='cuda:6'), covar=tensor([0.0574, 0.0629, 0.0800, 0.1045, 0.0562, 0.0634, 0.0559, 0.1009], device='cuda:6'), in_proj_covar=tensor([0.0140, 0.0135, 0.0145, 0.0128, 0.0115, 0.0147, 0.0148, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 07:40:25,416 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36605.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:40:27,774 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36608.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:40:35,322 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36612.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:40:38,177 INFO [finetune.py:976] (6/7) Epoch 7, batch 2250, loss[loss=0.2022, simple_loss=0.2771, pruned_loss=0.0637, over 4899.00 frames. ], tot_loss[loss=0.2112, simple_loss=0.2735, pruned_loss=0.07448, over 952260.13 frames. ], batch size: 43, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:40:49,911 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.735e+02 1.950e+02 2.446e+02 5.153e+02, threshold=3.899e+02, percent-clipped=1.0
2023-03-26 07:41:41,216 INFO [finetune.py:976] (6/7) Epoch 7, batch 2300, loss[loss=0.2291, simple_loss=0.2938, pruned_loss=0.08222, over 4855.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.274, pruned_loss=0.07447, over 953437.49 frames. ], batch size: 31, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:41:41,333 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36666.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:41:48,524 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36669.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:41:51,489 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36673.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:42:18,127 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36700.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 07:42:36,651 INFO [finetune.py:976] (6/7) Epoch 7, batch 2350, loss[loss=0.1994, simple_loss=0.2525, pruned_loss=0.07317, over 4816.00 frames. ], tot_loss[loss=0.2095, simple_loss=0.2718, pruned_loss=0.07366, over 954455.55 frames. ], batch size: 30, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:42:49,194 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.509e+02 1.866e+02 2.321e+02 4.735e+02, threshold=3.732e+02, percent-clipped=2.0
2023-03-26 07:43:14,772 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4975, 1.4548, 2.0115, 1.8027, 1.6425, 3.7380, 1.2711, 1.6228], device='cuda:6'), covar=tensor([0.1207, 0.2224, 0.1455, 0.1208, 0.1751, 0.0244, 0.1892, 0.2115], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0079, 0.0092, 0.0082, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6')
2023-03-26 07:43:34,301 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36761.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 07:43:37,713 INFO [finetune.py:976] (6/7) Epoch 7, batch 2400, loss[loss=0.1948, simple_loss=0.2565, pruned_loss=0.06655, over 4937.00 frames. ], tot_loss[loss=0.2084, simple_loss=0.2698, pruned_loss=0.07347, over 956307.84 frames. ], batch size: 33, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:44:18,714 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4042, 2.3278, 2.3974, 1.7149, 2.5017, 2.4456, 2.3857, 2.0237], device='cuda:6'), covar=tensor([0.0569, 0.0585, 0.0719, 0.0950, 0.0484, 0.0777, 0.0641, 0.0962], device='cuda:6'), in_proj_covar=tensor([0.0137, 0.0133, 0.0143, 0.0126, 0.0113, 0.0144, 0.0146, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 07:44:28,521 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36805.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 07:44:40,195 INFO [finetune.py:976] (6/7) Epoch 7, batch 2450, loss[loss=0.1916, simple_loss=0.2608, pruned_loss=0.06123, over 4752.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2662, pruned_loss=0.07242, over 956371.92 frames. ], batch size: 27, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:44:51,698 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.798e+02 2.140e+02 2.594e+02 4.660e+02, threshold=4.281e+02, percent-clipped=3.0
2023-03-26 07:45:10,241 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=36837.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:45:49,126 INFO [finetune.py:976] (6/7) Epoch 7, batch 2500, loss[loss=0.2451, simple_loss=0.3015, pruned_loss=0.09438, over 4808.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2687, pruned_loss=0.0737, over 954073.67 frames. ], batch size: 41, lr: 3.87e-03, grad_scale: 16.0
2023-03-26 07:45:49,285 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36866.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 07:46:12,543 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=36885.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:46:44,105 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36911.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:46:52,299 INFO [finetune.py:976] (6/7) Epoch 7, batch 2550, loss[loss=0.1852, simple_loss=0.2662, pruned_loss=0.05208, over 4818.00 frames. ], tot_loss[loss=0.2103, simple_loss=0.272, pruned_loss=0.07434, over 954325.78 frames. ], batch size: 38, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:47:03,264 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.261e+02 1.620e+02 1.912e+02 2.307e+02 6.491e+02, threshold=3.825e+02, percent-clipped=1.0
2023-03-26 07:47:45,973 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=36958.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:47:47,753 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36961.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:47:55,215 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36964.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:47:56,365 INFO [finetune.py:976] (6/7) Epoch 7, batch 2600, loss[loss=0.2395, simple_loss=0.2929, pruned_loss=0.09305, over 4898.00 frames. ], tot_loss[loss=0.2121, simple_loss=0.2739, pruned_loss=0.07514, over 955290.40 frames. ], batch size: 36, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:47:57,656 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=36968.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:48:05,161 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=36972.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:48:06,401 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6239, 1.5962, 1.7025, 0.9786, 1.8568, 1.9690, 1.9055, 1.4843], device='cuda:6'), covar=tensor([0.0948, 0.0765, 0.0422, 0.0665, 0.0375, 0.0529, 0.0376, 0.0638], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0155, 0.0119, 0.0136, 0.0131, 0.0123, 0.0144, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.5838e-05, 1.1443e-04, 8.6255e-05, 9.9002e-05, 9.3959e-05, 9.0736e-05, 1.0644e-04, 1.0639e-04], device='cuda:6')
2023-03-26 07:48:07,943 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.61 vs. limit=5.0
2023-03-26 07:48:41,465 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37004.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:48:53,701 INFO [finetune.py:976] (6/7) Epoch 7, batch 2650, loss[loss=0.2361, simple_loss=0.2911, pruned_loss=0.09052, over 4790.00 frames. ], tot_loss[loss=0.2137, simple_loss=0.2756, pruned_loss=0.07592, over 955522.93 frames. ], batch size: 25, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:49:01,168 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37019.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:49:05,454 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.627e+02 1.954e+02 2.393e+02 3.704e+02, threshold=3.907e+02, percent-clipped=0.0
2023-03-26 07:49:13,950 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-26 07:49:41,697 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37056.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 07:49:47,743 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37065.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:49:48,225 INFO [finetune.py:976] (6/7) Epoch 7, batch 2700, loss[loss=0.1792, simple_loss=0.2448, pruned_loss=0.05683, over 4321.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2744, pruned_loss=0.0747, over 956534.64 frames. ], batch size: 18, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:50:22,104 INFO [finetune.py:976] (6/7) Epoch 7, batch 2750, loss[loss=0.2237, simple_loss=0.2707, pruned_loss=0.08831, over 4812.00 frames. ], tot_loss[loss=0.2094, simple_loss=0.2713, pruned_loss=0.0737, over 955932.39 frames. ], batch size: 40, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:50:28,699 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.629e+02 1.991e+02 2.307e+02 4.303e+02, threshold=3.983e+02, percent-clipped=1.0
2023-03-26 07:50:32,458 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8710, 3.9284, 3.7973, 1.9394, 4.0113, 2.9690, 0.7555, 2.8194], device='cuda:6'), covar=tensor([0.2271, 0.1876, 0.1630, 0.3602, 0.1020, 0.1146, 0.5124, 0.1709], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0173, 0.0163, 0.0129, 0.0154, 0.0123, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 07:50:58,198 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37161.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 07:51:03,500 INFO [finetune.py:976] (6/7) Epoch 7, batch 2800, loss[loss=0.1674, simple_loss=0.2377, pruned_loss=0.04855, over 4794.00 frames. ], tot_loss[loss=0.2064, simple_loss=0.2677, pruned_loss=0.07261, over 953600.14 frames. ], batch size: 29, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:51:26,884 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9130, 1.0049, 1.6151, 1.5866, 1.5055, 1.4787, 1.4694, 1.5799], device='cuda:6'), covar=tensor([0.5839, 0.7296, 0.7024, 0.6555, 0.8400, 0.6321, 0.8556, 0.6362], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0241, 0.0254, 0.0254, 0.0243, 0.0219, 0.0271, 0.0225], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-26 07:52:09,301 INFO [finetune.py:976] (6/7) Epoch 7, batch 2850, loss[loss=0.1632, simple_loss=0.2385, pruned_loss=0.04391, over 4831.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2666, pruned_loss=0.07201, over 954261.44 frames. ], batch size: 25, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:52:20,864 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.582e+02 1.929e+02 2.327e+02 4.539e+02, threshold=3.857e+02, percent-clipped=3.0
2023-03-26 07:53:00,964 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37261.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:53:02,796 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37264.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:53:03,918 INFO [finetune.py:976] (6/7) Epoch 7, batch 2900, loss[loss=0.2866, simple_loss=0.3412, pruned_loss=0.116, over 4728.00 frames. ], tot_loss[loss=0.2083, simple_loss=0.2693, pruned_loss=0.07366, over 951659.27 frames. ], batch size: 59, lr: 3.87e-03, grad_scale: 32.0
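The [scaling.py:679] "Whitening" lines come from a constraint that pushes groups of channels toward a white (isotropic) covariance: metric measures how uneven the covariance spectrum is (1.0 = perfectly white) and limit is the value the constraint enforces. A sketch of one plausible form of the statistic; the exact normalization in icefall's scaling.py is not copied here:

import torch

def whitening_metric(x, num_groups):
    # x: (num_frames, num_channels) activations.
    n, c = x.shape
    xg = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
    xg = xg - xg.mean(dim=1, keepdim=True)
    cov = xg.transpose(1, 2) @ xg / n      # per-group channel covariance
    eigs = torch.linalg.eigvalsh(cov)      # covariance eigenvalues
    # Second moment over squared mean of the spectrum; equals 1.0 exactly
    # when all eigenvalues are equal, i.e. the group is fully whitened.
    return ((eigs ** 2).mean() / eigs.mean() ** 2).item()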
2023-03-26 07:53:09,381 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37267.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:53:10,005 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37268.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:53:10,020 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37268.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:54:04,934 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37309.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:54:06,759 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37312.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:54:12,684 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37314.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:54:13,847 INFO [finetune.py:976] (6/7) Epoch 7, batch 2950, loss[loss=0.1913, simple_loss=0.2528, pruned_loss=0.06488, over 4711.00 frames. ], tot_loss[loss=0.2095, simple_loss=0.2711, pruned_loss=0.07394, over 951391.29 frames. ], batch size: 23, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:54:13,923 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37316.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:54:25,441 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.193e+02 1.702e+02 2.045e+02 2.514e+02 5.908e+02, threshold=4.090e+02, percent-clipped=3.0
2023-03-26 07:54:27,383 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37329.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:55:00,458 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37356.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 07:55:05,685 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37360.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:55:08,107 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37364.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:55:09,242 INFO [finetune.py:976] (6/7) Epoch 7, batch 3000, loss[loss=0.2061, simple_loss=0.27, pruned_loss=0.07115, over 4722.00 frames. ], tot_loss[loss=0.2118, simple_loss=0.2731, pruned_loss=0.07531, over 949578.63 frames. ], batch size: 54, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:55:09,242 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 07:55:25,743 INFO [finetune.py:1010] (6/7) Epoch 7, validation: loss=0.161, simple_loss=0.2327, pruned_loss=0.04464, over 2265189.00 frames.
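Periodically the training loop pauses to run the same loss over the whole dev set with gradients disabled, producing the "validation:" line above. Schematically (the batch and model interfaces here are placeholders, not icefall's actual ones):

import torch

def compute_validation_loss(model, valid_dl, device):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_dl:
            # Placeholder interface: a real recipe computes the transducer
            # loss from the features and supervisions in the batch.
            loss, num_frames = model(batch)
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames  # e.g. 0.161 over 2,265,189 frames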
2023-03-26 07:55:25,743 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6300MB
2023-03-26 07:55:52,779 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8368, 1.4643, 2.4306, 1.5848, 2.0315, 2.1178, 1.4425, 2.2156], device='cuda:6'), covar=tensor([0.1727, 0.2426, 0.1378, 0.2232, 0.1192, 0.1789, 0.3050, 0.1278], device='cuda:6'), in_proj_covar=tensor([0.0201, 0.0203, 0.0197, 0.0195, 0.0181, 0.0220, 0.0216, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 07:55:54,506 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37404.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 07:56:02,292 INFO [finetune.py:976] (6/7) Epoch 7, batch 3050, loss[loss=0.1883, simple_loss=0.246, pruned_loss=0.0653, over 4813.00 frames. ], tot_loss[loss=0.2117, simple_loss=0.273, pruned_loss=0.07517, over 949656.04 frames. ], batch size: 33, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:56:03,695 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.88 vs. limit=5.0
2023-03-26 07:56:11,546 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=37425.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:56:12,026 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.173e+02 1.581e+02 1.871e+02 2.387e+02 4.591e+02, threshold=3.742e+02, percent-clipped=1.0
2023-03-26 07:56:28,352 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.20 vs. limit=5.0
2023-03-26 07:56:48,964 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37461.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 07:56:57,054 INFO [finetune.py:976] (6/7) Epoch 7, batch 3100, loss[loss=0.1472, simple_loss=0.221, pruned_loss=0.03668, over 4765.00 frames. ], tot_loss[loss=0.2093, simple_loss=0.2707, pruned_loss=0.07393, over 950099.04 frames. ], batch size: 28, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:57:52,638 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37509.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 07:58:01,949 INFO [finetune.py:976] (6/7) Epoch 7, batch 3150, loss[loss=0.1916, simple_loss=0.2502, pruned_loss=0.06649, over 4824.00 frames. ], tot_loss[loss=0.2076, simple_loss=0.2686, pruned_loss=0.07326, over 951930.99 frames. ], batch size: 30, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:58:13,089 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.704e+02 2.041e+02 2.515e+02 5.799e+02, threshold=4.081e+02, percent-clipped=3.0
2023-03-26 07:58:34,944 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-26 07:59:05,776 INFO [finetune.py:976] (6/7) Epoch 7, batch 3200, loss[loss=0.2334, simple_loss=0.2886, pruned_loss=0.08907, over 4750.00 frames. ], tot_loss[loss=0.2045, simple_loss=0.2652, pruned_loss=0.07192, over 952058.32 frames. ], batch size: 54, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 07:59:06,456 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37567.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 07:59:14,013 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.9224, 4.2631, 4.5339, 4.6919, 4.6292, 4.4001, 5.0147, 1.4630], device='cuda:6'), covar=tensor([0.0683, 0.0823, 0.0644, 0.1026, 0.1111, 0.1476, 0.0550, 0.5827], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0243, 0.0275, 0.0293, 0.0332, 0.0283, 0.0304, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:00:06,578 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-26 08:00:08,909 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37614.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:00:14,413 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37615.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:00:14,948 INFO [finetune.py:976] (6/7) Epoch 7, batch 3250, loss[loss=0.1786, simple_loss=0.2406, pruned_loss=0.0583, over 4789.00 frames. ], tot_loss[loss=0.2059, simple_loss=0.2668, pruned_loss=0.07255, over 952064.70 frames. ], batch size: 26, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 08:00:24,939 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37624.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:00:26,116 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.664e+02 1.918e+02 2.274e+02 4.430e+02, threshold=3.836e+02, percent-clipped=1.0
2023-03-26 08:01:09,572 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37660.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:01:10,724 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37662.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:01:18,265 INFO [finetune.py:976] (6/7) Epoch 7, batch 3300, loss[loss=0.207, simple_loss=0.2739, pruned_loss=0.07004, over 4892.00 frames. ], tot_loss[loss=0.2107, simple_loss=0.272, pruned_loss=0.07474, over 954070.65 frames. ], batch size: 35, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 08:02:12,468 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37708.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:02:22,967 INFO [finetune.py:976] (6/7) Epoch 7, batch 3350, loss[loss=0.2204, simple_loss=0.275, pruned_loss=0.08288, over 4761.00 frames. ], tot_loss[loss=0.2115, simple_loss=0.2735, pruned_loss=0.07473, over 955639.99 frames. ], batch size: 27, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 08:02:25,479 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=37720.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:02:34,111 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.166e+02 1.767e+02 2.019e+02 2.457e+02 5.992e+02, threshold=4.038e+02, percent-clipped=4.0
2023-03-26 08:03:05,888 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7072, 1.4155, 2.0401, 1.4416, 1.8167, 1.9103, 1.4561, 2.0077], device='cuda:6'), covar=tensor([0.1160, 0.2265, 0.1191, 0.1527, 0.0985, 0.1206, 0.3091, 0.0860], device='cuda:6'), in_proj_covar=tensor([0.0202, 0.0203, 0.0198, 0.0195, 0.0181, 0.0220, 0.0217, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:03:28,100 INFO [finetune.py:976] (6/7) Epoch 7, batch 3400, loss[loss=0.2579, simple_loss=0.3225, pruned_loss=0.09665, over 4923.00 frames. ], tot_loss[loss=0.2133, simple_loss=0.2754, pruned_loss=0.07562, over 956307.73 frames. ], batch size: 38, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 08:04:32,122 INFO [finetune.py:976] (6/7) Epoch 7, batch 3450, loss[loss=0.2278, simple_loss=0.2787, pruned_loss=0.08839, over 4906.00 frames. ], tot_loss[loss=0.2119, simple_loss=0.2743, pruned_loss=0.0748, over 955838.21 frames. ], batch size: 37, lr: 3.87e-03, grad_scale: 32.0
2023-03-26 08:04:32,327 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.63 vs. limit=5.0
2023-03-26 08:04:43,339 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.715e+02 1.991e+02 2.496e+02 6.747e+02, threshold=3.982e+02, percent-clipped=3.0
2023-03-26 08:05:36,359 INFO [finetune.py:976] (6/7) Epoch 7, batch 3500, loss[loss=0.2501, simple_loss=0.2975, pruned_loss=0.1014, over 4819.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.272, pruned_loss=0.0738, over 956367.24 frames. ], batch size: 39, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:06:41,166 INFO [finetune.py:976] (6/7) Epoch 7, batch 3550, loss[loss=0.1825, simple_loss=0.2489, pruned_loss=0.05804, over 4760.00 frames. ], tot_loss[loss=0.2075, simple_loss=0.2691, pruned_loss=0.07297, over 955314.94 frames. ], batch size: 27, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:06:57,391 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=37924.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:06:58,533 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.559e+02 1.846e+02 2.185e+02 4.242e+02, threshold=3.693e+02, percent-clipped=1.0
2023-03-26 08:07:08,809 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8174, 1.6507, 2.2037, 1.5728, 2.1791, 2.2295, 1.6103, 2.3274], device='cuda:6'), covar=tensor([0.1504, 0.2136, 0.1465, 0.1993, 0.0906, 0.1460, 0.2852, 0.0787], device='cuda:6'), in_proj_covar=tensor([0.0203, 0.0204, 0.0198, 0.0195, 0.0182, 0.0221, 0.0218, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:07:52,118 INFO [finetune.py:976] (6/7) Epoch 7, batch 3600, loss[loss=0.2735, simple_loss=0.3335, pruned_loss=0.1068, over 4733.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2664, pruned_loss=0.07224, over 954541.52 frames. ], batch size: 59, lr: 3.86e-03, grad_scale: 32.0
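The printed lr decays smoothly within the epoch (3.87e-03 above, 3.86e-03 from batch 3500 onward) because the scheduler discounts the base learning rate by both the global batch count and the epoch. The Eden schedule used in these recipes has approximately this form; treat the constants as assumptions and see icefall's optim.py for the authoritative version:

def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
    # Both factors start near 1.0 and decay slowly as batch/epoch grow.
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# Around batch 38000 of epoch 7 with a 4e-3 base rate this gives
# ~3.86e-03, matching the values logged here.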
2023-03-26 08:08:01,699 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=37972.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:08:03,446 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2605, 1.9300, 2.5511, 4.1905, 2.8360, 2.7764, 1.0717, 3.3522], device='cuda:6'), covar=tensor([0.1675, 0.1417, 0.1525, 0.0448, 0.0791, 0.1399, 0.1905, 0.0449], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0164, 0.0101, 0.0139, 0.0128, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 08:08:06,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=37979.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:08:25,660 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7953, 1.7020, 1.4238, 1.4944, 1.6094, 1.5726, 1.6387, 2.2868], device='cuda:6'), covar=tensor([0.5455, 0.5380, 0.4220, 0.5011, 0.4660, 0.2981, 0.4824, 0.2108], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0222, 0.0281, 0.0241, 0.0206, 0.0245, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:08:59,067 INFO [finetune.py:976] (6/7) Epoch 7, batch 3650, loss[loss=0.2782, simple_loss=0.3321, pruned_loss=0.1121, over 4722.00 frames. ], tot_loss[loss=0.2091, simple_loss=0.2702, pruned_loss=0.074, over 956287.89 frames. ], batch size: 54, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:09:07,234 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=38020.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:09:10,799 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.699e+02 2.068e+02 2.418e+02 4.148e+02, threshold=4.136e+02, percent-clipped=4.0
2023-03-26 08:09:27,947 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38040.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:09:46,796 INFO [finetune.py:976] (6/7) Epoch 7, batch 3700, loss[loss=0.2779, simple_loss=0.3155, pruned_loss=0.1202, over 4888.00 frames. ], tot_loss[loss=0.2112, simple_loss=0.2731, pruned_loss=0.07469, over 956369.16 frames. ], batch size: 32, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:09:48,562 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=38068.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:09:52,152 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5062, 1.3875, 1.3988, 1.5010, 1.0632, 2.9274, 1.1657, 1.6639], device='cuda:6'), covar=tensor([0.3753, 0.2679, 0.2246, 0.2573, 0.1955, 0.0285, 0.2729, 0.1347], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0122, 0.0116, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 08:10:19,917 INFO [finetune.py:976] (6/7) Epoch 7, batch 3750, loss[loss=0.2701, simple_loss=0.3323, pruned_loss=0.1039, over 4903.00 frames. ], tot_loss[loss=0.2132, simple_loss=0.2751, pruned_loss=0.07562, over 956711.38 frames. ], batch size: 36, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:10:24,747 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2876, 2.4765, 2.1594, 1.6832, 2.3051, 2.6408, 2.4406, 2.0977], device='cuda:6'), covar=tensor([0.0633, 0.0566, 0.0849, 0.0969, 0.0941, 0.0677, 0.0634, 0.1029], device='cuda:6'), in_proj_covar=tensor([0.0137, 0.0134, 0.0144, 0.0127, 0.0114, 0.0146, 0.0147, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:10:26,926 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.737e+01 1.627e+02 1.982e+02 2.503e+02 4.763e+02, threshold=3.965e+02, percent-clipped=1.0
2023-03-26 08:10:57,174 INFO [finetune.py:976] (6/7) Epoch 7, batch 3800, loss[loss=0.1858, simple_loss=0.2567, pruned_loss=0.05742, over 4813.00 frames. ], tot_loss[loss=0.2135, simple_loss=0.2757, pruned_loss=0.07568, over 956358.74 frames. ], batch size: 30, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:11:30,371 INFO [finetune.py:976] (6/7) Epoch 7, batch 3850, loss[loss=0.2248, simple_loss=0.2808, pruned_loss=0.08442, over 4924.00 frames. ], tot_loss[loss=0.2098, simple_loss=0.2722, pruned_loss=0.07375, over 954717.21 frames. ], batch size: 37, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:11:43,045 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.610e+02 2.090e+02 2.406e+02 4.877e+02, threshold=4.181e+02, percent-clipped=2.0
2023-03-26 08:12:25,295 INFO [finetune.py:976] (6/7) Epoch 7, batch 3900, loss[loss=0.14, simple_loss=0.2103, pruned_loss=0.03487, over 4782.00 frames. ], tot_loss[loss=0.2075, simple_loss=0.2693, pruned_loss=0.07281, over 954551.22 frames. ], batch size: 29, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:13:28,058 INFO [finetune.py:976] (6/7) Epoch 7, batch 3950, loss[loss=0.2142, simple_loss=0.2671, pruned_loss=0.08065, over 4850.00 frames. ], tot_loss[loss=0.2033, simple_loss=0.2651, pruned_loss=0.0708, over 955425.24 frames. ], batch size: 49, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:13:36,672 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 08:13:45,512 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.693e+02 1.988e+02 2.374e+02 4.679e+02, threshold=3.976e+02, percent-clipped=1.0
2023-03-26 08:13:56,390 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=38335.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:14:21,687 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=38360.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:14:22,893 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4518, 1.2207, 1.2694, 1.4301, 1.5817, 1.5609, 1.3514, 1.2010], device='cuda:6'), covar=tensor([0.0287, 0.0297, 0.0616, 0.0291, 0.0264, 0.0509, 0.0336, 0.0424], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0111, 0.0139, 0.0115, 0.0104, 0.0100, 0.0090, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.9013e-05, 8.7069e-05, 1.1080e-04, 9.0577e-05, 8.1872e-05, 7.4156e-05, 6.8558e-05, 8.4605e-05], device='cuda:6')
2023-03-26 08:14:25,159 INFO [finetune.py:976] (6/7) Epoch 7, batch 4000, loss[loss=0.2807, simple_loss=0.3198, pruned_loss=0.1208, over 4167.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2645, pruned_loss=0.07124, over 954428.50 frames. ], batch size: 65, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:14:44,853 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2892, 2.0693, 2.0177, 2.2984, 2.8834, 2.2209, 2.0517, 1.7017], device='cuda:6'), covar=tensor([0.2338, 0.2153, 0.1893, 0.1747, 0.1904, 0.1114, 0.2380, 0.1945], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0210, 0.0205, 0.0187, 0.0240, 0.0178, 0.0215, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:15:11,482 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5803, 1.7081, 1.9077, 1.0906, 1.8375, 2.0718, 2.1820, 1.6451], device='cuda:6'), covar=tensor([0.1027, 0.0723, 0.0387, 0.0614, 0.0435, 0.0493, 0.0242, 0.0612], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0155, 0.0120, 0.0136, 0.0131, 0.0123, 0.0144, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.5533e-05, 1.1440e-04, 8.6682e-05, 9.9336e-05, 9.3648e-05, 9.0309e-05, 1.0601e-04, 1.0645e-04], device='cuda:6')
2023-03-26 08:15:29,791 INFO [finetune.py:976] (6/7) Epoch 7, batch 4050, loss[loss=0.2084, simple_loss=0.28, pruned_loss=0.06842, over 4909.00 frames. ], tot_loss[loss=0.2071, simple_loss=0.2685, pruned_loss=0.07281, over 955725.11 frames. ], batch size: 37, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:15:33,992 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38421.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:15:42,068 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.248e+02 1.778e+02 2.129e+02 2.625e+02 5.238e+02, threshold=4.258e+02, percent-clipped=5.0
2023-03-26 08:15:51,143 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0
2023-03-26 08:16:32,429 INFO [finetune.py:976] (6/7) Epoch 7, batch 4100, loss[loss=0.1976, simple_loss=0.2699, pruned_loss=0.06268, over 4848.00 frames. ], tot_loss[loss=0.2088, simple_loss=0.2709, pruned_loss=0.0733, over 956638.99 frames. ], batch size: 44, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:16:38,902 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6245, 1.4535, 1.5164, 1.5647, 1.1575, 3.3839, 1.3034, 1.8268], device='cuda:6'), covar=tensor([0.3351, 0.2408, 0.2044, 0.2232, 0.1851, 0.0185, 0.2739, 0.1341], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0122, 0.0116, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0003, 0.0005, 0.0004], device='cuda:6')
2023-03-26 08:17:31,592 INFO [finetune.py:976] (6/7) Epoch 7, batch 4150, loss[loss=0.2265, simple_loss=0.2977, pruned_loss=0.07765, over 4921.00 frames. ], tot_loss[loss=0.2095, simple_loss=0.2718, pruned_loss=0.07357, over 955011.31 frames. ], batch size: 42, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:17:42,162 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5209, 2.3053, 1.8526, 2.4810, 2.3229, 2.1697, 2.1647, 3.2552], device='cuda:6'), covar=tensor([0.5117, 0.6329, 0.4948, 0.5914, 0.5163, 0.3629, 0.5751, 0.2207], device='cuda:6'), in_proj_covar=tensor([0.0283, 0.0259, 0.0220, 0.0281, 0.0240, 0.0205, 0.0245, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:17:43,682 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.723e+02 2.145e+02 2.598e+02 6.605e+02, threshold=4.291e+02, percent-clipped=2.0
2023-03-26 08:18:34,213 INFO [finetune.py:976] (6/7) Epoch 7, batch 4200, loss[loss=0.1774, simple_loss=0.2326, pruned_loss=0.06107, over 4714.00 frames. ], tot_loss[loss=0.2092, simple_loss=0.2717, pruned_loss=0.07337, over 955018.79 frames. ], batch size: 23, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:19:34,114 INFO [finetune.py:976] (6/7) Epoch 7, batch 4250, loss[loss=0.2051, simple_loss=0.2694, pruned_loss=0.07036, over 4759.00 frames. ], tot_loss[loss=0.2076, simple_loss=0.2699, pruned_loss=0.07269, over 955033.12 frames. ], batch size: 28, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:19:44,842 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.606e+02 1.980e+02 2.259e+02 5.740e+02, threshold=3.960e+02, percent-clipped=2.0
2023-03-26 08:20:02,314 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=38635.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:20:38,696 INFO [finetune.py:976] (6/7) Epoch 7, batch 4300, loss[loss=0.2023, simple_loss=0.2649, pruned_loss=0.0698, over 4940.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.2666, pruned_loss=0.0716, over 955788.92 frames. ], batch size: 33, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:20:59,383 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=38683.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:21:41,240 INFO [finetune.py:976] (6/7) Epoch 7, batch 4350, loss[loss=0.1772, simple_loss=0.2483, pruned_loss=0.05303, over 4821.00 frames. ], tot_loss[loss=0.2014, simple_loss=0.2627, pruned_loss=0.07007, over 957064.53 frames. ], batch size: 30, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:21:41,328 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=38716.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:21:52,348 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.679e+02 1.871e+02 2.197e+02 5.866e+02, threshold=3.741e+02, percent-clipped=4.0
2023-03-26 08:22:43,656 INFO [finetune.py:976] (6/7) Epoch 7, batch 4400, loss[loss=0.1981, simple_loss=0.2836, pruned_loss=0.05633, over 4827.00 frames. ], tot_loss[loss=0.2017, simple_loss=0.2634, pruned_loss=0.07002, over 957054.62 frames. ], batch size: 40, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:23:24,012 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0
2023-03-26 08:23:42,202 INFO [finetune.py:976] (6/7) Epoch 7, batch 4450, loss[loss=0.1963, simple_loss=0.2715, pruned_loss=0.06055, over 4927.00 frames. ], tot_loss[loss=0.2054, simple_loss=0.2679, pruned_loss=0.07142, over 956106.82 frames. ], batch size: 38, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:23:51,335 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1803, 1.7875, 1.8314, 0.7357, 2.0712, 2.3143, 1.9091, 1.8626], device='cuda:6'), covar=tensor([0.0960, 0.0870, 0.0491, 0.0875, 0.0516, 0.0567, 0.0475, 0.0664], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0155, 0.0120, 0.0137, 0.0131, 0.0124, 0.0145, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.5913e-05, 1.1453e-04, 8.6817e-05, 9.9614e-05, 9.4225e-05, 9.1118e-05, 1.0650e-04, 1.0727e-04], device='cuda:6')
2023-03-26 08:23:51,903 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=38823.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:23:53,601 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.712e+02 1.965e+02 2.330e+02 4.727e+02, threshold=3.929e+02, percent-clipped=4.0
2023-03-26 08:24:44,296 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8975, 1.9944, 1.5991, 1.5914, 2.2523, 2.2298, 2.0110, 1.8218], device='cuda:6'), covar=tensor([0.0372, 0.0322, 0.0618, 0.0354, 0.0356, 0.0520, 0.0329, 0.0417], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0111, 0.0139, 0.0115, 0.0104, 0.0100, 0.0091, 0.0109], device='cuda:6'), out_proj_covar=tensor([6.9273e-05, 8.7326e-05, 1.1147e-04, 9.0416e-05, 8.1644e-05, 7.4376e-05, 6.8668e-05, 8.5073e-05], device='cuda:6')
2023-03-26 08:24:44,784 INFO [finetune.py:976] (6/7) Epoch 7, batch 4500, loss[loss=0.2376, simple_loss=0.2944, pruned_loss=0.0904, over 4900.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2703, pruned_loss=0.07294, over 953577.85 frames. ], batch size: 36, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:24:57,278 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0003, 1.8278, 1.4741, 1.6302, 1.9370, 1.6407, 2.1209, 1.8979], device='cuda:6'), covar=tensor([0.1696, 0.2738, 0.3927, 0.3373, 0.2903, 0.2137, 0.3547, 0.2444], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0190, 0.0234, 0.0253, 0.0233, 0.0192, 0.0211, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:25:06,355 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=38884.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:25:49,099 INFO [finetune.py:976] (6/7) Epoch 7, batch 4550, loss[loss=0.2191, simple_loss=0.2968, pruned_loss=0.07068, over 4797.00 frames. ], tot_loss[loss=0.2109, simple_loss=0.2733, pruned_loss=0.07427, over 954262.73 frames. ], batch size: 29, lr: 3.86e-03, grad_scale: 64.0
2023-03-26 08:25:59,506 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.671e+02 2.000e+02 2.524e+02 3.434e+02, threshold=4.000e+02, percent-clipped=0.0
2023-03-26 08:26:47,279 INFO [finetune.py:976] (6/7) Epoch 7, batch 4600, loss[loss=0.1974, simple_loss=0.2682, pruned_loss=0.06327, over 4787.00 frames. ], tot_loss[loss=0.2093, simple_loss=0.2719, pruned_loss=0.07332, over 954266.86 frames. ], batch size: 29, lr: 3.86e-03, grad_scale: 64.0
2023-03-26 08:27:54,980 INFO [finetune.py:976] (6/7) Epoch 7, batch 4650, loss[loss=0.2369, simple_loss=0.2934, pruned_loss=0.09017, over 4793.00 frames. ], tot_loss[loss=0.2077, simple_loss=0.2695, pruned_loss=0.07297, over 953749.69 frames. ], batch size: 51, lr: 3.86e-03, grad_scale: 32.0
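The grad_scale figure in the loss lines is the dynamic loss-scaling factor of mixed-precision training: it is doubled after a long run of overflow-free steps (16 -> 32 -> 64 above) and halved whenever fp16 gradients overflow (back to 32 here, and to 16 further down). This is the standard torch.cuda.amp behavior, sketched below; the recipe wraps it inside its own training loop:

import torch

scaler = torch.cuda.amp.GradScaler()   # dynamic loss scaling for fp16

def train_step(model, optimizer, batch):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)            # placeholder interface
    scaler.scale(loss).backward()      # backprop the scaled loss
    scaler.step(optimizer)             # the step is skipped on overflow
    scaler.update()                    # grows the scale after a streak of
                                       # clean steps, shrinks it on overflow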
2023-03-26 08:27:55,086 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39016.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:28:06,171 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.761e+01 1.505e+02 1.924e+02 2.345e+02 4.238e+02, threshold=3.847e+02, percent-clipped=2.0
2023-03-26 08:28:28,258 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7321, 3.5998, 3.3545, 1.6641, 3.6920, 2.7613, 0.7077, 2.5342], device='cuda:6'), covar=tensor([0.2446, 0.1585, 0.1518, 0.3342, 0.0968, 0.0965, 0.4629, 0.1482], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0173, 0.0162, 0.0129, 0.0154, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 08:28:56,283 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39064.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:28:57,995 INFO [finetune.py:976] (6/7) Epoch 7, batch 4700, loss[loss=0.1961, simple_loss=0.2601, pruned_loss=0.06598, over 4819.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.2665, pruned_loss=0.07165, over 954173.77 frames. ], batch size: 25, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:29:56,018 INFO [finetune.py:976] (6/7) Epoch 7, batch 4750, loss[loss=0.2028, simple_loss=0.2709, pruned_loss=0.0673, over 4796.00 frames. ], tot_loss[loss=0.2033, simple_loss=0.2645, pruned_loss=0.07105, over 952276.40 frames. ], batch size: 51, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:30:08,813 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.221e+02 1.609e+02 1.819e+02 2.206e+02 4.512e+02, threshold=3.638e+02, percent-clipped=2.0
2023-03-26 08:30:58,730 INFO [finetune.py:976] (6/7) Epoch 7, batch 4800, loss[loss=0.2289, simple_loss=0.2833, pruned_loss=0.08722, over 4884.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2676, pruned_loss=0.07281, over 951717.54 frames. ], batch size: 31, lr: 3.86e-03, grad_scale: 32.0
2023-03-26 08:31:12,899 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39179.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:31:22,031 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39185.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:31:31,454 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8382, 1.1961, 1.7078, 1.7000, 1.5020, 1.5131, 1.6293, 1.5998], device='cuda:6'), covar=tensor([0.4966, 0.6024, 0.5068, 0.5240, 0.6573, 0.4976, 0.6588, 0.4688], device='cuda:6'), in_proj_covar=tensor([0.0229, 0.0242, 0.0254, 0.0253, 0.0244, 0.0220, 0.0272, 0.0225], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:31:57,083 INFO [finetune.py:976] (6/7) Epoch 7, batch 4850, loss[loss=0.2219, simple_loss=0.2776, pruned_loss=0.08314, over 4887.00 frames. ], tot_loss[loss=0.2091, simple_loss=0.2712, pruned_loss=0.0735, over 950882.61 frames. ], batch size: 35, lr: 3.86e-03, grad_scale: 16.0
2023-03-26 08:32:06,049 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.691e+02 2.004e+02 2.499e+02 4.240e+02, threshold=4.008e+02, percent-clipped=2.0
2023-03-26 08:32:18,162 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39246.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:32:30,567 INFO [finetune.py:976] (6/7) Epoch 7, batch 4900, loss[loss=0.1974, simple_loss=0.2734, pruned_loss=0.06066, over 4821.00 frames. ], tot_loss[loss=0.2114, simple_loss=0.2736, pruned_loss=0.07463, over 953814.45 frames. ], batch size: 33, lr: 3.86e-03, grad_scale: 16.0
2023-03-26 08:32:43,182 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39284.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:33:03,517 INFO [finetune.py:976] (6/7) Epoch 7, batch 4950, loss[loss=0.1789, simple_loss=0.2498, pruned_loss=0.05402, over 4877.00 frames. ], tot_loss[loss=0.2125, simple_loss=0.2749, pruned_loss=0.07507, over 952170.09 frames. ], batch size: 34, lr: 3.86e-03, grad_scale: 16.0
2023-03-26 08:33:12,727 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.619e+02 1.980e+02 2.423e+02 3.796e+02, threshold=3.961e+02, percent-clipped=0.0
2023-03-26 08:33:24,232 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39345.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:33:37,224 INFO [finetune.py:976] (6/7) Epoch 7, batch 5000, loss[loss=0.1852, simple_loss=0.2356, pruned_loss=0.06746, over 4741.00 frames. ], tot_loss[loss=0.2096, simple_loss=0.2722, pruned_loss=0.07355, over 951746.70 frames. ], batch size: 59, lr: 3.86e-03, grad_scale: 16.0
2023-03-26 08:34:10,935 INFO [finetune.py:976] (6/7) Epoch 7, batch 5050, loss[loss=0.2203, simple_loss=0.2871, pruned_loss=0.07676, over 4937.00 frames. ], tot_loss[loss=0.208, simple_loss=0.2698, pruned_loss=0.07307, over 953026.80 frames. ], batch size: 33, lr: 3.85e-03, grad_scale: 16.0
2023-03-26 08:34:19,594 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.623e+02 1.955e+02 2.404e+02 3.498e+02, threshold=3.910e+02, percent-clipped=0.0
2023-03-26 08:34:22,060 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6631, 1.4940, 1.4565, 1.6498, 1.0167, 3.3937, 1.3199, 1.8268], device='cuda:6'), covar=tensor([0.3404, 0.2543, 0.2158, 0.2258, 0.2053, 0.0211, 0.2697, 0.1327], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0115, 0.0120, 0.0123, 0.0117, 0.0099, 0.0101, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 08:34:49,059 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8467, 1.2727, 0.8721, 1.6997, 2.0952, 1.1209, 1.4703, 1.5864], device='cuda:6'), covar=tensor([0.1282, 0.1863, 0.1718, 0.1063, 0.1673, 0.1944, 0.1301, 0.1762], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0092, 0.0122, 0.0095, 0.0099, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 08:34:52,721 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39463.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:34:54,413 INFO [finetune.py:976] (6/7) Epoch 7, batch 5100, loss[loss=0.1936, simple_loss=0.2593, pruned_loss=0.06392, over 4788.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.266, pruned_loss=0.07187, over 952926.72 frames. ], batch size: 29, lr: 3.85e-03, grad_scale: 16.0
2023-03-26 08:35:05,351 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39479.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:35:39,562 INFO [finetune.py:976] (6/7) Epoch 7, batch 5150, loss[loss=0.1758, simple_loss=0.2538, pruned_loss=0.04893, over 4908.00 frames. ], tot_loss[loss=0.2063, simple_loss=0.2671, pruned_loss=0.07278, over 952764.42 frames. ], batch size: 37, lr: 3.85e-03, grad_scale: 16.0
2023-03-26 08:35:41,565 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5133, 1.3977, 1.5359, 0.8476, 1.6208, 1.5350, 1.4340, 1.3151], device='cuda:6'), covar=tensor([0.0639, 0.0761, 0.0694, 0.0984, 0.0719, 0.0865, 0.0736, 0.1342], device='cuda:6'), in_proj_covar=tensor([0.0137, 0.0134, 0.0144, 0.0126, 0.0113, 0.0145, 0.0146, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 08:35:47,011 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39524.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:35:49,282 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39527.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:35:49,815 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.405e+01 1.710e+02 2.016e+02 2.412e+02 5.054e+02, threshold=4.032e+02, percent-clipped=2.0
2023-03-26 08:36:08,544 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39541.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:36:24,638 INFO [finetune.py:976] (6/7) Epoch 7, batch 5200, loss[loss=0.2815, simple_loss=0.3309, pruned_loss=0.116, over 4809.00 frames. ], tot_loss[loss=0.2087, simple_loss=0.2704, pruned_loss=0.07345, over 953326.52 frames. ], batch size: 51, lr: 3.85e-03, grad_scale: 16.0
2023-03-26 08:36:35,828 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.08 vs. limit=5.0
2023-03-26 08:37:07,821 INFO [finetune.py:976] (6/7) Epoch 7, batch 5250, loss[loss=0.2396, simple_loss=0.292, pruned_loss=0.09358, over 4197.00 frames. ], tot_loss[loss=0.2102, simple_loss=0.2721, pruned_loss=0.07409, over 952119.29 frames. ], batch size: 65, lr: 3.85e-03, grad_scale: 16.0
2023-03-26 08:37:15,021 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.709e+02 2.070e+02 2.577e+02 5.953e+02, threshold=4.140e+02, percent-clipped=1.0
2023-03-26 08:37:25,988 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39639.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 08:37:26,611 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39640.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 08:37:43,681 INFO [finetune.py:976] (6/7) Epoch 7, batch 5300, loss[loss=0.1793, simple_loss=0.2449, pruned_loss=0.05684, over 4838.00 frames. ], tot_loss[loss=0.2099, simple_loss=0.2721, pruned_loss=0.07384, over 952711.80 frames. ], batch size: 49, lr: 3.85e-03, grad_scale: 16.0
2023-03-26 08:37:52,267 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.77 vs. limit=5.0
limit=5.0 2023-03-26 08:38:22,981 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39700.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 08:38:24,950 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 08:38:32,618 INFO [finetune.py:976] (6/7) Epoch 7, batch 5350, loss[loss=0.2298, simple_loss=0.2794, pruned_loss=0.09012, over 4716.00 frames. ], tot_loss[loss=0.2101, simple_loss=0.2728, pruned_loss=0.07366, over 954155.80 frames. ], batch size: 59, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:38:40,832 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.565e+02 1.855e+02 2.323e+02 5.491e+02, threshold=3.710e+02, percent-clipped=1.0 2023-03-26 08:39:15,940 INFO [finetune.py:976] (6/7) Epoch 7, batch 5400, loss[loss=0.1966, simple_loss=0.2573, pruned_loss=0.06799, over 4783.00 frames. ], tot_loss[loss=0.2074, simple_loss=0.2701, pruned_loss=0.07229, over 956531.10 frames. ], batch size: 29, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:39:16,675 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39767.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:39:50,814 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=39815.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:39:51,311 INFO [finetune.py:976] (6/7) Epoch 7, batch 5450, loss[loss=0.2211, simple_loss=0.2885, pruned_loss=0.07683, over 4846.00 frames. ], tot_loss[loss=0.2057, simple_loss=0.2678, pruned_loss=0.07179, over 956105.71 frames. ], batch size: 47, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:39:53,199 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39819.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:40:03,634 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.339e+01 1.599e+02 1.928e+02 2.299e+02 3.698e+02, threshold=3.856e+02, percent-clipped=0.0 2023-03-26 08:40:03,743 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39828.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:40:21,681 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39841.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:40:54,098 INFO [finetune.py:976] (6/7) Epoch 7, batch 5500, loss[loss=0.1724, simple_loss=0.2402, pruned_loss=0.05234, over 4757.00 frames. ], tot_loss[loss=0.2028, simple_loss=0.2644, pruned_loss=0.07062, over 954300.16 frames. ], batch size: 26, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:41:05,333 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=39876.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:41:18,432 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39889.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:41:57,092 INFO [finetune.py:976] (6/7) Epoch 7, batch 5550, loss[loss=0.2874, simple_loss=0.3299, pruned_loss=0.1224, over 4826.00 frames. ], tot_loss[loss=0.2054, simple_loss=0.2669, pruned_loss=0.07191, over 954960.14 frames. 
], batch size: 47, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:42:09,835 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.647e+02 1.995e+02 2.278e+02 3.177e+02, threshold=3.991e+02, percent-clipped=0.0 2023-03-26 08:42:28,009 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=39940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:42:48,260 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8825, 1.0140, 1.7933, 1.7223, 1.5595, 1.5578, 1.5364, 1.6624], device='cuda:6'), covar=tensor([0.4306, 0.5804, 0.4956, 0.5105, 0.6272, 0.4791, 0.6790, 0.4468], device='cuda:6'), in_proj_covar=tensor([0.0231, 0.0243, 0.0256, 0.0254, 0.0246, 0.0222, 0.0274, 0.0227], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:42:58,496 INFO [finetune.py:976] (6/7) Epoch 7, batch 5600, loss[loss=0.2448, simple_loss=0.3062, pruned_loss=0.09176, over 4818.00 frames. ], tot_loss[loss=0.2091, simple_loss=0.2707, pruned_loss=0.07376, over 953091.33 frames. ], batch size: 40, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:43:20,786 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=39988.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:43:30,217 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=39995.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 08:43:43,019 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-26 08:43:51,133 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4474, 1.2935, 1.4009, 1.3403, 0.8662, 2.2803, 0.7992, 1.2919], device='cuda:6'), covar=tensor([0.3680, 0.2719, 0.2291, 0.2592, 0.2164, 0.0378, 0.2784, 0.1499], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0115, 0.0120, 0.0123, 0.0117, 0.0099, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 08:43:52,203 INFO [finetune.py:976] (6/7) Epoch 7, batch 5650, loss[loss=0.1849, simple_loss=0.2558, pruned_loss=0.05697, over 4851.00 frames. ], tot_loss[loss=0.2099, simple_loss=0.2723, pruned_loss=0.07377, over 954325.62 frames. ], batch size: 31, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:44:02,440 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9810, 1.7666, 1.7970, 2.0480, 2.4106, 2.0263, 1.5073, 1.7010], device='cuda:6'), covar=tensor([0.2076, 0.2145, 0.1917, 0.1580, 0.1516, 0.1143, 0.2529, 0.1951], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0211, 0.0206, 0.0188, 0.0241, 0.0179, 0.0216, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:44:09,055 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.647e+02 1.995e+02 2.469e+02 4.643e+02, threshold=3.989e+02, percent-clipped=3.0 2023-03-26 08:44:50,696 INFO [finetune.py:976] (6/7) Epoch 7, batch 5700, loss[loss=0.1966, simple_loss=0.2359, pruned_loss=0.07865, over 3604.00 frames. ], tot_loss[loss=0.2084, simple_loss=0.2693, pruned_loss=0.07378, over 938307.12 frames. 
], batch size: 15, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:44:50,802 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3629, 1.0719, 1.1669, 1.1779, 1.4787, 1.4648, 1.3194, 1.1460], device='cuda:6'), covar=tensor([0.0300, 0.0343, 0.0539, 0.0310, 0.0279, 0.0359, 0.0263, 0.0425], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0112, 0.0140, 0.0116, 0.0104, 0.0101, 0.0091, 0.0110], device='cuda:6'), out_proj_covar=tensor([6.9955e-05, 8.8215e-05, 1.1193e-04, 9.1151e-05, 8.2129e-05, 7.5079e-05, 6.9132e-05, 8.5177e-05], device='cuda:6') 2023-03-26 08:45:12,921 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8834, 1.6608, 1.9504, 1.2785, 2.0381, 1.9820, 1.8576, 1.4022], device='cuda:6'), covar=tensor([0.0694, 0.0971, 0.0769, 0.1114, 0.0707, 0.0718, 0.0813, 0.1784], device='cuda:6'), in_proj_covar=tensor([0.0137, 0.0134, 0.0145, 0.0127, 0.0114, 0.0145, 0.0146, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:45:42,070 INFO [finetune.py:976] (6/7) Epoch 8, batch 0, loss[loss=0.2271, simple_loss=0.2904, pruned_loss=0.08186, over 4752.00 frames. ], tot_loss[loss=0.2271, simple_loss=0.2904, pruned_loss=0.08186, over 4752.00 frames. ], batch size: 26, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:45:42,071 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 08:45:49,849 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4698, 1.2601, 1.2877, 1.3963, 1.6342, 1.5250, 1.4059, 1.2585], device='cuda:6'), covar=tensor([0.0324, 0.0306, 0.0634, 0.0282, 0.0263, 0.0490, 0.0325, 0.0410], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0113, 0.0140, 0.0116, 0.0105, 0.0101, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.0022e-05, 8.8500e-05, 1.1228e-04, 9.1282e-05, 8.2386e-05, 7.5241e-05, 6.9340e-05, 8.5588e-05], device='cuda:6') 2023-03-26 08:45:57,866 INFO [finetune.py:1010] (6/7) Epoch 8, validation: loss=0.1624, simple_loss=0.234, pruned_loss=0.04544, over 2265189.00 frames. 2023-03-26 08:45:57,866 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6300MB 2023-03-26 08:46:20,316 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40119.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:46:20,725 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-26 08:46:26,527 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40123.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:46:29,511 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.580e+02 2.018e+02 2.508e+02 5.130e+02, threshold=4.036e+02, percent-clipped=1.0 2023-03-26 08:46:41,216 INFO [finetune.py:976] (6/7) Epoch 8, batch 50, loss[loss=0.236, simple_loss=0.2903, pruned_loss=0.09088, over 4920.00 frames. ], tot_loss[loss=0.2158, simple_loss=0.2776, pruned_loss=0.07704, over 216373.25 frames. 
], batch size: 41, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:46:44,273 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6755, 1.5295, 1.6126, 1.7363, 0.9500, 3.5707, 1.2839, 1.9313], device='cuda:6'), covar=tensor([0.3386, 0.2474, 0.2051, 0.2224, 0.1980, 0.0178, 0.2720, 0.1343], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0116, 0.0120, 0.0123, 0.0117, 0.0099, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 08:47:08,173 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40167.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:10,679 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40171.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:13,611 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-26 08:47:14,329 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7404, 3.9690, 3.7974, 2.2034, 4.0691, 3.0690, 1.0286, 2.8995], device='cuda:6'), covar=tensor([0.2736, 0.2001, 0.1684, 0.3040, 0.0969, 0.1014, 0.4609, 0.1452], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0172, 0.0161, 0.0130, 0.0155, 0.0122, 0.0145, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 08:47:26,490 INFO [finetune.py:976] (6/7) Epoch 8, batch 100, loss[loss=0.1778, simple_loss=0.2403, pruned_loss=0.0577, over 4923.00 frames. ], tot_loss[loss=0.2058, simple_loss=0.2663, pruned_loss=0.07269, over 377265.16 frames. ], batch size: 38, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:47:27,195 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40195.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:32,210 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2308, 1.4346, 1.5437, 0.8047, 1.2499, 1.6845, 1.7001, 1.4296], device='cuda:6'), covar=tensor([0.0963, 0.0784, 0.0522, 0.0564, 0.0589, 0.0729, 0.0400, 0.0767], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0121, 0.0137, 0.0133, 0.0126, 0.0146, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.6892e-05, 1.1612e-04, 8.7611e-05, 1.0006e-04, 9.5540e-05, 9.2499e-05, 1.0733e-04, 1.0818e-04], device='cuda:6') 2023-03-26 08:47:42,321 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40218.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:47:48,308 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.570e+02 1.832e+02 2.394e+02 3.868e+02, threshold=3.663e+02, percent-clipped=0.0 2023-03-26 08:47:59,301 INFO [finetune.py:976] (6/7) Epoch 8, batch 150, loss[loss=0.1725, simple_loss=0.2296, pruned_loss=0.05772, over 4817.00 frames. ], tot_loss[loss=0.2017, simple_loss=0.2614, pruned_loss=0.07096, over 504618.38 frames. 
], batch size: 40, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:48:07,722 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40256.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:18,412 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6434, 1.4969, 1.1188, 0.3048, 1.2794, 1.4681, 1.4009, 1.4910], device='cuda:6'), covar=tensor([0.0982, 0.0791, 0.1385, 0.2054, 0.1392, 0.2397, 0.2270, 0.0784], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0201, 0.0203, 0.0188, 0.0219, 0.0207, 0.0224, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:48:18,414 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:22,653 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40279.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:33,050 INFO [finetune.py:976] (6/7) Epoch 8, batch 200, loss[loss=0.2762, simple_loss=0.3286, pruned_loss=0.1119, over 4742.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2636, pruned_loss=0.0725, over 606180.56 frames. ], batch size: 54, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:48:33,764 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40295.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 08:48:55,738 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.652e+02 1.957e+02 2.371e+02 3.958e+02, threshold=3.914e+02, percent-clipped=3.0 2023-03-26 08:48:57,115 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40330.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:48:58,962 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40333.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:49:05,958 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40343.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 08:49:06,467 INFO [finetune.py:976] (6/7) Epoch 8, batch 250, loss[loss=0.1795, simple_loss=0.2271, pruned_loss=0.06593, over 4124.00 frames. ], tot_loss[loss=0.21, simple_loss=0.2699, pruned_loss=0.07504, over 680698.82 frames. ], batch size: 17, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:49:37,937 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40391.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:49:40,040 INFO [finetune.py:976] (6/7) Epoch 8, batch 300, loss[loss=0.1937, simple_loss=0.2445, pruned_loss=0.07145, over 4775.00 frames. ], tot_loss[loss=0.2082, simple_loss=0.27, pruned_loss=0.07317, over 742847.90 frames. 
], batch size: 26, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:49:42,028 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3562, 2.2128, 1.8154, 2.2793, 2.1435, 2.0869, 2.0327, 3.1474], device='cuda:6'), covar=tensor([0.5689, 0.6814, 0.4822, 0.6557, 0.5714, 0.3392, 0.6071, 0.1962], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0258, 0.0220, 0.0280, 0.0241, 0.0205, 0.0244, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:50:05,000 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40423.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:50:07,992 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.680e+02 2.022e+02 2.440e+02 4.521e+02, threshold=4.043e+02, percent-clipped=1.0 2023-03-26 08:50:27,981 INFO [finetune.py:976] (6/7) Epoch 8, batch 350, loss[loss=0.2217, simple_loss=0.2867, pruned_loss=0.07831, over 4811.00 frames. ], tot_loss[loss=0.2086, simple_loss=0.2709, pruned_loss=0.07316, over 789782.54 frames. ], batch size: 25, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:51:01,401 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:01,450 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:26,251 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40492.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:27,353 INFO [finetune.py:976] (6/7) Epoch 8, batch 400, loss[loss=0.2179, simple_loss=0.2831, pruned_loss=0.07634, over 4713.00 frames. ], tot_loss[loss=0.2111, simple_loss=0.2732, pruned_loss=0.07448, over 828891.09 frames. ], batch size: 54, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:51:28,096 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7903, 0.7690, 1.6718, 1.5886, 1.5206, 1.4877, 1.4221, 1.5603], device='cuda:6'), covar=tensor([0.3916, 0.4917, 0.4847, 0.4687, 0.5623, 0.4322, 0.5634, 0.4139], device='cuda:6'), in_proj_covar=tensor([0.0231, 0.0242, 0.0256, 0.0254, 0.0245, 0.0222, 0.0273, 0.0227], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:51:52,934 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40519.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:51:58,843 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.662e+02 2.008e+02 2.590e+02 4.107e+02, threshold=4.016e+02, percent-clipped=2.0 2023-03-26 08:52:07,726 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-26 08:52:11,109 INFO [finetune.py:976] (6/7) Epoch 8, batch 450, loss[loss=0.1812, simple_loss=0.2552, pruned_loss=0.0536, over 4911.00 frames. ], tot_loss[loss=0.2104, simple_loss=0.2725, pruned_loss=0.07419, over 857095.76 frames. 
], batch size: 37, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:52:17,023 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2075, 1.7155, 2.0273, 2.0231, 1.7316, 1.7793, 1.8770, 1.8217], device='cuda:6'), covar=tensor([0.4892, 0.6067, 0.4762, 0.5788, 0.7250, 0.5122, 0.7358, 0.4758], device='cuda:6'), in_proj_covar=tensor([0.0231, 0.0242, 0.0256, 0.0254, 0.0246, 0.0222, 0.0273, 0.0227], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:52:21,180 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40551.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:52:26,847 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40553.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:52:41,178 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40574.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:52:54,277 INFO [finetune.py:976] (6/7) Epoch 8, batch 500, loss[loss=0.2025, simple_loss=0.254, pruned_loss=0.07553, over 4804.00 frames. ], tot_loss[loss=0.2083, simple_loss=0.2696, pruned_loss=0.07351, over 878382.48 frames. ], batch size: 29, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:53:17,850 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.648e+02 1.946e+02 2.379e+02 4.476e+02, threshold=3.892e+02, percent-clipped=1.0 2023-03-26 08:53:17,930 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40628.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:53:28,117 INFO [finetune.py:976] (6/7) Epoch 8, batch 550, loss[loss=0.1283, simple_loss=0.1826, pruned_loss=0.03703, over 2809.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2659, pruned_loss=0.07133, over 894699.66 frames. ], batch size: 11, lr: 3.85e-03, grad_scale: 16.0 2023-03-26 08:53:29,484 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-26 08:53:30,676 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9427, 1.7204, 2.2583, 2.0256, 2.0802, 4.5417, 1.9622, 2.1161], device='cuda:6'), covar=tensor([0.0833, 0.1685, 0.0987, 0.0965, 0.1416, 0.0170, 0.1212, 0.1608], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 08:53:32,543 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40651.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:53:56,756 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40686.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:54:01,538 INFO [finetune.py:976] (6/7) Epoch 8, batch 600, loss[loss=0.2361, simple_loss=0.2958, pruned_loss=0.08824, over 4870.00 frames. ], tot_loss[loss=0.206, simple_loss=0.2673, pruned_loss=0.07239, over 905261.05 frames. 
], batch size: 34, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:54:11,167 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3214, 2.1740, 1.8233, 0.9904, 1.9938, 1.8491, 1.6479, 2.0960], device='cuda:6'), covar=tensor([0.0806, 0.0708, 0.1381, 0.1896, 0.1269, 0.2013, 0.1937, 0.0834], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0201, 0.0188, 0.0218, 0.0206, 0.0222, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:54:14,590 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40712.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:54:24,591 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.757e+02 2.080e+02 2.524e+02 4.426e+02, threshold=4.160e+02, percent-clipped=1.0 2023-03-26 08:54:34,712 INFO [finetune.py:976] (6/7) Epoch 8, batch 650, loss[loss=0.1754, simple_loss=0.251, pruned_loss=0.04995, over 4916.00 frames. ], tot_loss[loss=0.208, simple_loss=0.2697, pruned_loss=0.07314, over 914709.01 frames. ], batch size: 38, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:55:08,422 INFO [finetune.py:976] (6/7) Epoch 8, batch 700, loss[loss=0.1562, simple_loss=0.231, pruned_loss=0.04074, over 4809.00 frames. ], tot_loss[loss=0.2086, simple_loss=0.2706, pruned_loss=0.07328, over 921827.48 frames. ], batch size: 45, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:55:28,101 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9572, 1.8363, 1.7929, 2.0157, 1.4781, 4.6475, 1.5667, 2.2586], device='cuda:6'), covar=tensor([0.3289, 0.2403, 0.1996, 0.2167, 0.1671, 0.0099, 0.2407, 0.1334], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0116, 0.0120, 0.0123, 0.0117, 0.0099, 0.0101, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 08:55:31,876 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.288e+02 1.702e+02 1.948e+02 2.422e+02 4.930e+02, threshold=3.896e+02, percent-clipped=3.0 2023-03-26 08:55:51,723 INFO [finetune.py:976] (6/7) Epoch 8, batch 750, loss[loss=0.2375, simple_loss=0.2992, pruned_loss=0.08789, over 4797.00 frames. ], tot_loss[loss=0.2104, simple_loss=0.2731, pruned_loss=0.07383, over 931351.30 frames. ], batch size: 45, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:55:59,250 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=40848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:01,140 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40851.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:20,355 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40865.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:32,141 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40874.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:56:55,831 INFO [finetune.py:976] (6/7) Epoch 8, batch 800, loss[loss=0.2426, simple_loss=0.3023, pruned_loss=0.09145, over 4907.00 frames. ], tot_loss[loss=0.2095, simple_loss=0.2726, pruned_loss=0.07325, over 936773.38 frames. 
], batch size: 36, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:57:03,992 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40899.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:04,649 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40900.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:13,893 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3145, 5.1324, 4.8656, 3.2668, 5.1975, 4.0359, 1.5114, 3.9097], device='cuda:6'), covar=tensor([0.2203, 0.1444, 0.1516, 0.2680, 0.0726, 0.0823, 0.4382, 0.1295], device='cuda:6'), in_proj_covar=tensor([0.0156, 0.0174, 0.0163, 0.0131, 0.0158, 0.0124, 0.0148, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 08:57:24,031 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40922.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:26,522 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40926.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:27,573 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.606e+02 1.985e+02 2.397e+02 9.945e+02, threshold=3.971e+02, percent-clipped=3.0 2023-03-26 08:57:27,688 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40928.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:57:45,602 INFO [finetune.py:976] (6/7) Epoch 8, batch 850, loss[loss=0.235, simple_loss=0.2807, pruned_loss=0.09465, over 4786.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2691, pruned_loss=0.07209, over 941806.87 frames. ], batch size: 28, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:58:00,393 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=40961.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 08:58:09,359 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2712, 2.2289, 2.2239, 1.6489, 2.3555, 2.3826, 2.2419, 1.9306], device='cuda:6'), covar=tensor([0.0622, 0.0643, 0.0741, 0.0939, 0.0518, 0.0751, 0.0698, 0.1085], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0132, 0.0142, 0.0125, 0.0113, 0.0143, 0.0144, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:58:10,972 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=40976.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:18,126 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=40986.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:22,822 INFO [finetune.py:976] (6/7) Epoch 8, batch 900, loss[loss=0.1975, simple_loss=0.2605, pruned_loss=0.06721, over 4778.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.267, pruned_loss=0.07159, over 942677.68 frames. ], batch size: 28, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:58:25,185 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=40997.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:31,508 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41007.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:36,961 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. 
limit=2.0 2023-03-26 08:58:46,165 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.525e+02 1.868e+02 2.283e+02 3.598e+02, threshold=3.736e+02, percent-clipped=0.0 2023-03-26 08:58:50,358 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41034.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:58:53,952 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0954, 2.0183, 1.5585, 2.0477, 2.0021, 1.7874, 2.4218, 2.0458], device='cuda:6'), covar=tensor([0.1444, 0.2503, 0.3409, 0.2824, 0.2798, 0.1757, 0.3342, 0.2110], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0191, 0.0236, 0.0256, 0.0237, 0.0194, 0.0212, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 08:58:56,855 INFO [finetune.py:976] (6/7) Epoch 8, batch 950, loss[loss=0.2353, simple_loss=0.2964, pruned_loss=0.08707, over 4812.00 frames. ], tot_loss[loss=0.2038, simple_loss=0.2654, pruned_loss=0.07111, over 946461.44 frames. ], batch size: 45, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:58:57,580 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41045.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:06,036 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41058.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:30,583 INFO [finetune.py:976] (6/7) Epoch 8, batch 1000, loss[loss=0.2245, simple_loss=0.2864, pruned_loss=0.08128, over 4133.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.2683, pruned_loss=0.07199, over 948759.02 frames. ], batch size: 65, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 08:59:36,004 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41102.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:38,378 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41106.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 08:59:52,966 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.652e+02 2.000e+02 2.359e+02 4.809e+02, threshold=4.000e+02, percent-clipped=2.0 2023-03-26 09:00:04,080 INFO [finetune.py:976] (6/7) Epoch 8, batch 1050, loss[loss=0.1652, simple_loss=0.2478, pruned_loss=0.04133, over 4765.00 frames. ], tot_loss[loss=0.2074, simple_loss=0.2702, pruned_loss=0.07234, over 950331.60 frames. ], batch size: 28, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:00:06,633 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41148.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:08,676 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.55 vs. limit=5.0 2023-03-26 09:00:16,288 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41163.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:37,446 INFO [finetune.py:976] (6/7) Epoch 8, batch 1100, loss[loss=0.2133, simple_loss=0.2743, pruned_loss=0.07616, over 4896.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2711, pruned_loss=0.07258, over 951814.19 frames. ], batch size: 43, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:00:38,742 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41196.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:49,535 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-26 09:00:54,982 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41221.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:00:59,686 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.750e+02 2.155e+02 2.664e+02 4.791e+02, threshold=4.309e+02, percent-clipped=2.0 2023-03-26 09:01:17,459 INFO [finetune.py:976] (6/7) Epoch 8, batch 1150, loss[loss=0.2231, simple_loss=0.2834, pruned_loss=0.08143, over 4814.00 frames. ], tot_loss[loss=0.2068, simple_loss=0.2704, pruned_loss=0.07161, over 953070.65 frames. ], batch size: 38, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:01:32,474 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41256.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 09:02:14,604 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2555, 1.5443, 0.6050, 2.0592, 2.5843, 1.8045, 2.0053, 2.0047], device='cuda:6'), covar=tensor([0.1456, 0.2074, 0.2363, 0.1170, 0.1855, 0.1972, 0.1360, 0.2140], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0097, 0.0115, 0.0093, 0.0124, 0.0096, 0.0101, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 09:02:15,113 INFO [finetune.py:976] (6/7) Epoch 8, batch 1200, loss[loss=0.1634, simple_loss=0.2304, pruned_loss=0.04817, over 4755.00 frames. ], tot_loss[loss=0.2068, simple_loss=0.2698, pruned_loss=0.07189, over 951232.81 frames. ], batch size: 27, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:02:24,087 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41307.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:02:35,935 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 09:02:37,271 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.297e+01 1.635e+02 1.914e+02 2.289e+02 4.123e+02, threshold=3.829e+02, percent-clipped=0.0 2023-03-26 09:02:39,174 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5880, 3.5034, 3.3549, 1.6201, 3.6029, 2.5635, 0.8142, 2.5378], device='cuda:6'), covar=tensor([0.2651, 0.1879, 0.1724, 0.3498, 0.1102, 0.1175, 0.4483, 0.1595], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0173, 0.0162, 0.0131, 0.0157, 0.0124, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 09:02:51,371 INFO [finetune.py:976] (6/7) Epoch 8, batch 1250, loss[loss=0.1868, simple_loss=0.258, pruned_loss=0.05777, over 4817.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.2676, pruned_loss=0.0711, over 953035.49 frames. 
], batch size: 30, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:03:02,690 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41353.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:03:04,420 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:03:15,768 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5610, 1.8728, 1.5022, 1.4920, 1.9250, 1.8718, 1.7264, 1.7269], device='cuda:6'), covar=tensor([0.0446, 0.0304, 0.0457, 0.0307, 0.0287, 0.0485, 0.0327, 0.0353], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0112, 0.0140, 0.0117, 0.0105, 0.0101, 0.0092, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.0471e-05, 8.7964e-05, 1.1228e-04, 9.1968e-05, 8.2790e-05, 7.4977e-05, 6.9247e-05, 8.5643e-05], device='cuda:6') 2023-03-26 09:03:32,985 INFO [finetune.py:976] (6/7) Epoch 8, batch 1300, loss[loss=0.2533, simple_loss=0.2996, pruned_loss=0.1035, over 4174.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2638, pruned_loss=0.06956, over 955122.09 frames. ], batch size: 65, lr: 3.84e-03, grad_scale: 32.0 2023-03-26 09:03:37,904 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41401.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:03:56,242 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.674e+02 1.900e+02 2.309e+02 4.379e+02, threshold=3.799e+02, percent-clipped=1.0 2023-03-26 09:04:06,253 INFO [finetune.py:976] (6/7) Epoch 8, batch 1350, loss[loss=0.2155, simple_loss=0.2841, pruned_loss=0.07351, over 4900.00 frames. ], tot_loss[loss=0.2026, simple_loss=0.265, pruned_loss=0.0701, over 956695.82 frames. ], batch size: 37, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:04:16,297 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=41458.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:04:39,877 INFO [finetune.py:976] (6/7) Epoch 8, batch 1400, loss[loss=0.1779, simple_loss=0.2523, pruned_loss=0.05178, over 4915.00 frames. ], tot_loss[loss=0.2045, simple_loss=0.2675, pruned_loss=0.07072, over 957566.90 frames. 
], batch size: 36, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:04:42,241 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7446, 1.2007, 0.9242, 1.6251, 2.0401, 1.4372, 1.5507, 1.5659], device='cuda:6'), covar=tensor([0.1522, 0.2135, 0.2037, 0.1253, 0.2062, 0.1985, 0.1437, 0.1997], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 09:04:58,587 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41521.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:05:00,392 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3179, 2.1597, 1.9988, 1.2893, 2.1576, 1.8517, 1.7158, 2.0473], device='cuda:6'), covar=tensor([0.0971, 0.0664, 0.1190, 0.1602, 0.1366, 0.1689, 0.1797, 0.0910], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0200, 0.0203, 0.0189, 0.0219, 0.0207, 0.0223, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:05:03,328 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.704e+02 2.004e+02 2.444e+02 3.700e+02, threshold=4.008e+02, percent-clipped=0.0 2023-03-26 09:05:12,589 INFO [finetune.py:976] (6/7) Epoch 8, batch 1450, loss[loss=0.2616, simple_loss=0.2872, pruned_loss=0.118, over 4111.00 frames. ], tot_loss[loss=0.206, simple_loss=0.2693, pruned_loss=0.07129, over 955424.77 frames. ], batch size: 18, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:05:19,677 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 09:05:21,978 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41556.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 09:05:30,810 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:05:46,318 INFO [finetune.py:976] (6/7) Epoch 8, batch 1500, loss[loss=0.2133, simple_loss=0.2946, pruned_loss=0.066, over 4922.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.2713, pruned_loss=0.07242, over 955496.70 frames. ], batch size: 42, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:05:53,554 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41604.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:06:10,824 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.231e+02 1.635e+02 1.924e+02 2.365e+02 3.634e+02, threshold=3.848e+02, percent-clipped=0.0 2023-03-26 09:06:22,448 INFO [finetune.py:976] (6/7) Epoch 8, batch 1550, loss[loss=0.1634, simple_loss=0.2356, pruned_loss=0.04556, over 4827.00 frames. ], tot_loss[loss=0.2072, simple_loss=0.2712, pruned_loss=0.07159, over 956819.67 frames. 
], batch size: 30, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:06:34,437 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41653.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:06:35,050 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41654.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:07:17,348 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8351, 1.8302, 1.8795, 1.1246, 1.8879, 1.9819, 1.8319, 1.5372], device='cuda:6'), covar=tensor([0.0561, 0.0659, 0.0596, 0.0874, 0.0574, 0.0611, 0.0561, 0.1212], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0130, 0.0140, 0.0124, 0.0112, 0.0141, 0.0142, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:07:19,720 INFO [finetune.py:976] (6/7) Epoch 8, batch 1600, loss[loss=0.2089, simple_loss=0.2707, pruned_loss=0.07358, over 4922.00 frames. ], tot_loss[loss=0.2064, simple_loss=0.2695, pruned_loss=0.07161, over 957819.32 frames. ], batch size: 38, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:07:25,549 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41701.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:07:25,577 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41701.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:07:39,591 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41715.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 09:07:43,372 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 09:07:48,911 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.581e+02 1.949e+02 2.490e+02 4.755e+02, threshold=3.899e+02, percent-clipped=2.0 2023-03-26 09:07:58,439 INFO [finetune.py:976] (6/7) Epoch 8, batch 1650, loss[loss=0.2008, simple_loss=0.2581, pruned_loss=0.07172, over 4914.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2659, pruned_loss=0.07, over 958318.21 frames. ], batch size: 37, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:08:00,743 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 09:08:01,534 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41749.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:08:09,838 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=41758.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:08:42,691 INFO [finetune.py:976] (6/7) Epoch 8, batch 1700, loss[loss=0.1809, simple_loss=0.2434, pruned_loss=0.05919, over 3993.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2636, pruned_loss=0.06921, over 957348.16 frames. ], batch size: 17, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:08:50,792 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=41806.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:09:06,690 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.763e+02 2.034e+02 2.335e+02 4.675e+02, threshold=4.069e+02, percent-clipped=2.0 2023-03-26 09:09:16,758 INFO [finetune.py:976] (6/7) Epoch 8, batch 1750, loss[loss=0.2119, simple_loss=0.2845, pruned_loss=0.06968, over 4809.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2648, pruned_loss=0.06951, over 957430.06 frames. 
], batch size: 51, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:09:50,592 INFO [finetune.py:976] (6/7) Epoch 8, batch 1800, loss[loss=0.2208, simple_loss=0.2802, pruned_loss=0.08069, over 4876.00 frames. ], tot_loss[loss=0.204, simple_loss=0.2677, pruned_loss=0.07015, over 958115.83 frames. ], batch size: 34, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:09:58,019 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41906.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:10:13,606 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.792e+02 2.103e+02 2.633e+02 4.479e+02, threshold=4.207e+02, percent-clipped=2.0 2023-03-26 09:10:23,653 INFO [finetune.py:976] (6/7) Epoch 8, batch 1850, loss[loss=0.2142, simple_loss=0.2786, pruned_loss=0.07485, over 4814.00 frames. ], tot_loss[loss=0.2069, simple_loss=0.2706, pruned_loss=0.07153, over 958587.30 frames. ], batch size: 40, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:10:26,677 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=41948.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 09:10:29,162 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1283, 1.9864, 1.6311, 1.7831, 2.0109, 1.7522, 2.3238, 2.0588], device='cuda:6'), covar=tensor([0.1369, 0.2390, 0.3385, 0.3237, 0.2896, 0.1875, 0.4305, 0.1896], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0188, 0.0232, 0.0252, 0.0234, 0.0192, 0.0209, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:10:30,731 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0 2023-03-26 09:10:38,721 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=41967.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:10:56,824 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9284, 1.9143, 1.9008, 1.3044, 2.0117, 2.0995, 1.9703, 1.6196], device='cuda:6'), covar=tensor([0.0563, 0.0661, 0.0713, 0.0899, 0.0561, 0.0617, 0.0592, 0.1052], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0131, 0.0141, 0.0124, 0.0112, 0.0141, 0.0143, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:10:57,329 INFO [finetune.py:976] (6/7) Epoch 8, batch 1900, loss[loss=0.2025, simple_loss=0.2818, pruned_loss=0.0616, over 4800.00 frames. ], tot_loss[loss=0.2078, simple_loss=0.2718, pruned_loss=0.07193, over 957717.38 frames. 
], batch size: 51, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:11:00,989 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5986, 1.4201, 1.3418, 1.5855, 1.9053, 1.6329, 1.1665, 1.3435], device='cuda:6'), covar=tensor([0.2475, 0.2641, 0.2351, 0.2023, 0.1977, 0.1439, 0.3092, 0.2319], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0210, 0.0204, 0.0188, 0.0239, 0.0178, 0.0214, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:11:08,244 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42009.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 09:11:08,813 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42010.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:11:08,875 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7971, 1.7093, 1.4029, 1.4947, 1.6004, 1.5995, 1.6254, 2.3033], device='cuda:6'), covar=tensor([0.5538, 0.5511, 0.4089, 0.5121, 0.4727, 0.3014, 0.4966, 0.2118], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0259, 0.0221, 0.0281, 0.0241, 0.0206, 0.0245, 0.0207], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:11:09,004 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 09:11:22,120 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.556e+02 1.920e+02 2.218e+02 3.872e+02, threshold=3.841e+02, percent-clipped=0.0 2023-03-26 09:11:32,122 INFO [finetune.py:976] (6/7) Epoch 8, batch 1950, loss[loss=0.1603, simple_loss=0.2332, pruned_loss=0.04368, over 4801.00 frames. ], tot_loss[loss=0.2056, simple_loss=0.2702, pruned_loss=0.07056, over 958793.55 frames. ], batch size: 29, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:11:40,206 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9100, 1.7593, 1.6693, 1.6178, 2.0710, 2.1336, 1.9458, 1.5612], device='cuda:6'), covar=tensor([0.0236, 0.0301, 0.0477, 0.0309, 0.0218, 0.0365, 0.0221, 0.0374], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0111, 0.0139, 0.0116, 0.0104, 0.0100, 0.0091, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.0002e-05, 8.7364e-05, 1.1112e-04, 9.1401e-05, 8.1623e-05, 7.4276e-05, 6.8648e-05, 8.4679e-05], device='cuda:6') 2023-03-26 09:12:30,961 INFO [finetune.py:976] (6/7) Epoch 8, batch 2000, loss[loss=0.1845, simple_loss=0.2446, pruned_loss=0.06218, over 4913.00 frames. ], tot_loss[loss=0.2033, simple_loss=0.267, pruned_loss=0.06987, over 959696.65 frames. 
], batch size: 36, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:12:42,910 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3524, 2.1452, 1.7170, 0.7173, 1.9355, 1.9129, 1.6879, 1.9874], device='cuda:6'), covar=tensor([0.0937, 0.0726, 0.1574, 0.2112, 0.1434, 0.2107, 0.2186, 0.0925], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0202, 0.0204, 0.0189, 0.0221, 0.0208, 0.0224, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:12:52,001 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6827, 1.5378, 1.4117, 1.5241, 1.2162, 3.6835, 1.5431, 2.1107], device='cuda:6'), covar=tensor([0.4195, 0.3026, 0.2540, 0.2989, 0.1844, 0.0235, 0.2438, 0.1227], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0115, 0.0119, 0.0123, 0.0117, 0.0099, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:12:56,440 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.506e+02 1.840e+02 2.176e+02 3.856e+02, threshold=3.679e+02, percent-clipped=1.0 2023-03-26 09:13:06,568 INFO [finetune.py:976] (6/7) Epoch 8, batch 2050, loss[loss=0.1428, simple_loss=0.2239, pruned_loss=0.03082, over 4755.00 frames. ], tot_loss[loss=0.2006, simple_loss=0.2632, pruned_loss=0.06895, over 959429.78 frames. ], batch size: 26, lr: 3.84e-03, grad_scale: 16.0 2023-03-26 09:13:24,478 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8856, 1.8020, 1.6325, 1.9988, 2.2860, 2.0264, 1.3697, 1.5975], device='cuda:6'), covar=tensor([0.2167, 0.2061, 0.1865, 0.1689, 0.1760, 0.1069, 0.2821, 0.1926], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0210, 0.0205, 0.0188, 0.0239, 0.0178, 0.0214, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:13:53,281 INFO [finetune.py:976] (6/7) Epoch 8, batch 2100, loss[loss=0.2465, simple_loss=0.3102, pruned_loss=0.09143, over 4808.00 frames. ], tot_loss[loss=0.2002, simple_loss=0.2627, pruned_loss=0.06886, over 957918.55 frames. ], batch size: 41, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:14:16,280 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.710e+02 1.945e+02 2.376e+02 4.149e+02, threshold=3.889e+02, percent-clipped=2.0 2023-03-26 09:14:25,918 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0861, 0.9081, 0.9690, 0.1958, 0.8371, 1.1006, 1.1004, 0.9342], device='cuda:6'), covar=tensor([0.1124, 0.0739, 0.0541, 0.0783, 0.0651, 0.0660, 0.0529, 0.0882], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0155, 0.0120, 0.0135, 0.0131, 0.0125, 0.0144, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.5358e-05, 1.1405e-04, 8.6593e-05, 9.8297e-05, 9.3728e-05, 9.1507e-05, 1.0615e-04, 1.0756e-04], device='cuda:6') 2023-03-26 09:14:26,997 INFO [finetune.py:976] (6/7) Epoch 8, batch 2150, loss[loss=0.1848, simple_loss=0.2508, pruned_loss=0.05942, over 4792.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.2684, pruned_loss=0.07186, over 953824.76 frames. ], batch size: 29, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:14:38,879 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42262.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:15:18,054 INFO [finetune.py:976] (6/7) Epoch 8, batch 2200, loss[loss=0.1517, simple_loss=0.2303, pruned_loss=0.03652, over 4792.00 frames. 
], tot_loss[loss=0.2082, simple_loss=0.2712, pruned_loss=0.07256, over 955294.41 frames. ], batch size: 29, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:15:25,348 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42304.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 09:15:29,026 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42310.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 09:15:45,966 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.543e+02 1.921e+02 2.479e+02 5.347e+02, threshold=3.843e+02, percent-clipped=1.0 2023-03-26 09:16:07,396 INFO [finetune.py:976] (6/7) Epoch 8, batch 2250, loss[loss=0.2081, simple_loss=0.2792, pruned_loss=0.06848, over 4866.00 frames. ], tot_loss[loss=0.2099, simple_loss=0.2729, pruned_loss=0.07346, over 954660.27 frames. ], batch size: 34, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:16:27,427 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=42358.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:16:38,066 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 09:16:47,903 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42379.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:17:09,028 INFO [finetune.py:976] (6/7) Epoch 8, batch 2300, loss[loss=0.1841, simple_loss=0.2564, pruned_loss=0.05589, over 4919.00 frames. ], tot_loss[loss=0.2083, simple_loss=0.272, pruned_loss=0.07232, over 953537.10 frames. ], batch size: 38, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:17:39,797 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4056, 2.2510, 1.8392, 2.4280, 2.3160, 1.9468, 2.8627, 2.3833], device='cuda:6'), covar=tensor([0.1488, 0.2930, 0.3713, 0.3436, 0.2935, 0.1964, 0.3359, 0.2167], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0190, 0.0235, 0.0255, 0.0236, 0.0194, 0.0212, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:17:57,222 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.935e+01 1.473e+02 1.816e+02 2.175e+02 3.275e+02, threshold=3.633e+02, percent-clipped=0.0 2023-03-26 09:18:06,489 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42440.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:18:09,265 INFO [finetune.py:976] (6/7) Epoch 8, batch 2350, loss[loss=0.2264, simple_loss=0.2804, pruned_loss=0.08625, over 4815.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2685, pruned_loss=0.07108, over 952946.49 frames. ], batch size: 40, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:18:45,542 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.01 vs. limit=2.0 2023-03-26 09:18:48,781 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 09:18:51,872 INFO [finetune.py:976] (6/7) Epoch 8, batch 2400, loss[loss=0.1403, simple_loss=0.2151, pruned_loss=0.03274, over 4763.00 frames. ], tot_loss[loss=0.2017, simple_loss=0.2645, pruned_loss=0.0694, over 949465.89 frames. ], batch size: 28, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:19:02,422 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42506.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:19:14,244 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. 
limit=2.0 2023-03-26 09:19:15,714 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7614, 1.2631, 0.8982, 1.7971, 2.3123, 1.3918, 1.5794, 1.5885], device='cuda:6'), covar=tensor([0.1549, 0.2330, 0.2156, 0.1225, 0.1807, 0.1908, 0.1633, 0.2150], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0101, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 09:19:25,682 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.517e+02 1.798e+02 2.223e+02 5.682e+02, threshold=3.597e+02, percent-clipped=2.0 2023-03-26 09:19:35,402 INFO [finetune.py:976] (6/7) Epoch 8, batch 2450, loss[loss=0.2483, simple_loss=0.3097, pruned_loss=0.09343, over 4739.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2616, pruned_loss=0.0687, over 949374.27 frames. ], batch size: 59, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:19:42,667 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7419, 0.6468, 1.7491, 1.5620, 1.5002, 1.4733, 1.3890, 1.6232], device='cuda:6'), covar=tensor([0.4313, 0.5322, 0.4347, 0.4473, 0.5528, 0.4164, 0.5743, 0.4260], device='cuda:6'), in_proj_covar=tensor([0.0231, 0.0241, 0.0254, 0.0254, 0.0245, 0.0222, 0.0273, 0.0227], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:19:48,471 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42562.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:19:51,969 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42567.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:20:08,910 INFO [finetune.py:976] (6/7) Epoch 8, batch 2500, loss[loss=0.2897, simple_loss=0.3542, pruned_loss=0.1126, over 4855.00 frames. ], tot_loss[loss=0.2, simple_loss=0.2623, pruned_loss=0.06891, over 951097.31 frames. ], batch size: 44, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:20:16,191 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=42604.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:20:20,740 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=42610.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:20:33,719 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.709e+02 1.979e+02 2.316e+02 5.134e+02, threshold=3.959e+02, percent-clipped=4.0 2023-03-26 09:20:42,883 INFO [finetune.py:976] (6/7) Epoch 8, batch 2550, loss[loss=0.2081, simple_loss=0.2723, pruned_loss=0.07194, over 4904.00 frames. ], tot_loss[loss=0.2041, simple_loss=0.2671, pruned_loss=0.07055, over 950153.61 frames. 
], batch size: 43, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:20:43,000 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3073, 2.7648, 2.1653, 1.7602, 2.5798, 2.7359, 2.5804, 2.3862], device='cuda:6'), covar=tensor([0.0665, 0.0523, 0.0861, 0.0925, 0.0683, 0.0663, 0.0622, 0.0844], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0133, 0.0144, 0.0126, 0.0114, 0.0144, 0.0144, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:20:46,522 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42649.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:20:53,416 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=42652.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 09:21:02,960 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7328, 0.9402, 1.6738, 1.5769, 1.4048, 1.3877, 1.4485, 1.4942], device='cuda:6'), covar=tensor([0.4307, 0.5043, 0.4090, 0.4491, 0.5399, 0.4196, 0.5649, 0.3952], device='cuda:6'), in_proj_covar=tensor([0.0233, 0.0242, 0.0255, 0.0255, 0.0246, 0.0223, 0.0274, 0.0228], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:21:25,355 INFO [finetune.py:976] (6/7) Epoch 8, batch 2600, loss[loss=0.2101, simple_loss=0.2847, pruned_loss=0.06775, over 4901.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2698, pruned_loss=0.07161, over 949706.81 frames. ], batch size: 43, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:21:32,028 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5273, 1.3700, 1.9455, 2.8996, 2.0004, 2.2968, 0.8495, 2.3217], device='cuda:6'), covar=tensor([0.1905, 0.1654, 0.1255, 0.0692, 0.0940, 0.1364, 0.2017, 0.0719], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0102, 0.0139, 0.0127, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 09:21:36,648 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42710.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:21:42,118 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0 2023-03-26 09:21:47,687 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 09:21:49,605 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.153e+02 1.771e+02 2.168e+02 2.787e+02 4.495e+02, threshold=4.337e+02, percent-clipped=4.0 2023-03-26 09:21:53,824 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42735.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:21:56,894 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6627, 1.4823, 1.2817, 1.0993, 1.7077, 1.4324, 2.0097, 1.6025], device='cuda:6'), covar=tensor([0.1519, 0.2348, 0.3799, 0.2951, 0.2908, 0.1789, 0.2382, 0.2126], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0188, 0.0233, 0.0254, 0.0235, 0.0193, 0.0211, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:21:59,152 INFO [finetune.py:976] (6/7) Epoch 8, batch 2650, loss[loss=0.2359, simple_loss=0.2698, pruned_loss=0.101, over 3992.00 frames. ], tot_loss[loss=0.2079, simple_loss=0.2716, pruned_loss=0.0721, over 951563.06 frames. 
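A note on the recurring loss fields above: loss[...] is the current batch, tot_loss[...] is a frame-weighted running average (here over roughly 950k frames). The logged triples are consistent with loss = 0.5 * simple_loss + pruned_loss; e.g. at batch 2250 above, 0.5 * 0.2792 + 0.06848 = 0.2081. A minimal sketch of that bookkeeping follows; the class and function names are illustrative, not the actual finetune.py internals.

    import collections

    class LossTracker:
        """Frame-weighted running average, mirroring the tot_loss[...] fields."""
        def __init__(self):
            self.sums = collections.defaultdict(float)
            self.frames = 0.0

        def update(self, num_frames, **losses):
            self.frames += num_frames
            for name, value in losses.items():
                self.sums[name] += value * num_frames

        def __str__(self):
            parts = [f"{k}={v / self.frames:.4g}" for k, v in self.sums.items()]
            return ", ".join(parts) + f", over {self.frames:.2f} frames."

    def combined_loss(simple_loss, pruned_loss, simple_loss_scale=0.5):
        # The 0.5 scale reproduces every logged triple, e.g. batch 2250 above:
        # 0.5 * 0.2792 + 0.06848 = 0.2081
        return simple_loss_scale * simple_loss + pruned_loss

    tot = LossTracker()
    tot.update(4866.0, loss=0.2081, simple_loss=0.2792, pruned_loss=0.06848)
    print(tot)  # loss=0.2081, simple_loss=0.2792, pruned_loss=0.06848, over 4866.00 frames.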
], batch size: 17, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:22:22,218 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9999, 1.7167, 1.5350, 1.5730, 1.6751, 1.6469, 1.6973, 2.4084], device='cuda:6'), covar=tensor([0.4831, 0.6024, 0.3966, 0.5293, 0.4803, 0.3040, 0.4793, 0.2028], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0259, 0.0221, 0.0281, 0.0241, 0.0206, 0.0245, 0.0207], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:22:40,398 INFO [finetune.py:976] (6/7) Epoch 8, batch 2700, loss[loss=0.2042, simple_loss=0.2602, pruned_loss=0.07407, over 4752.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.2702, pruned_loss=0.07104, over 952112.69 frames. ], batch size: 54, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:22:46,664 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2059, 1.7615, 1.9935, 1.9889, 1.7798, 1.7900, 1.9371, 1.8872], device='cuda:6'), covar=tensor([0.5065, 0.6455, 0.5191, 0.6306, 0.7323, 0.5578, 0.8027, 0.4825], device='cuda:6'), in_proj_covar=tensor([0.0233, 0.0242, 0.0255, 0.0255, 0.0246, 0.0223, 0.0275, 0.0228], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:23:27,702 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.159e+02 1.604e+02 1.897e+02 2.218e+02 3.599e+02, threshold=3.793e+02, percent-clipped=0.0 2023-03-26 09:23:46,889 INFO [finetune.py:976] (6/7) Epoch 8, batch 2750, loss[loss=0.1896, simple_loss=0.2483, pruned_loss=0.06546, over 4853.00 frames. ], tot_loss[loss=0.2048, simple_loss=0.268, pruned_loss=0.07078, over 954019.19 frames. ], batch size: 47, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:23:56,430 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42859.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:02,860 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=42862.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:14,750 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=42879.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:21,126 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 09:24:31,686 INFO [finetune.py:976] (6/7) Epoch 8, batch 2800, loss[loss=0.1945, simple_loss=0.2543, pruned_loss=0.06739, over 4815.00 frames. ], tot_loss[loss=0.2016, simple_loss=0.2641, pruned_loss=0.06952, over 951691.46 frames. 
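The zipformer.py:1188 lines record stochastic layer skipping: each encoder stack has a warmup window in batches (warmup_begin/warmup_end), and on some batches whole layers are bypassed (num_to_drop, layers_to_drop). Below is a plausible reconstruction of the logging side only; the drop probability is a placeholder, not the actual schedule in zipformer.py.

    import random

    def maybe_drop_layers(batch_count, num_layers, warmup_begin, warmup_end,
                          drop_prob=0.05, rng=random):
        # Placeholder policy: with small probability, bypass one random layer.
        num_to_drop = 1 if rng.random() < drop_prob else 0
        layers_to_drop = set(rng.sample(range(num_layers), num_to_drop))
        print(f"warmup_begin={warmup_begin:.1f}, warmup_end={warmup_end:.1f}, "
              f"batch_count={batch_count:.1f}, num_to_drop={num_to_drop}, "
              f"layers_to_drop={layers_to_drop}")
        return layers_to_drop

    maybe_drop_layers(42304.0, num_layers=4, warmup_begin=1333.3, warmup_end=2000.0)

An empty set prints as set() and a non-empty one as {3}, matching both forms seen in the log.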
], batch size: 39, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:24:32,393 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1928, 2.2385, 2.0052, 1.6002, 2.4179, 2.5548, 2.3056, 1.9538], device='cuda:6'), covar=tensor([0.0302, 0.0286, 0.0472, 0.0375, 0.0293, 0.0500, 0.0305, 0.0379], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0112, 0.0141, 0.0117, 0.0105, 0.0102, 0.0091, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.0938e-05, 8.8115e-05, 1.1269e-04, 9.1780e-05, 8.2541e-05, 7.5710e-05, 6.9094e-05, 8.5678e-05], device='cuda:6') 2023-03-26 09:24:48,544 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42920.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:24:54,867 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.137e+02 1.632e+02 1.941e+02 2.379e+02 3.960e+02, threshold=3.882e+02, percent-clipped=2.0 2023-03-26 09:25:02,607 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=42940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:25:04,922 INFO [finetune.py:976] (6/7) Epoch 8, batch 2850, loss[loss=0.1555, simple_loss=0.2105, pruned_loss=0.05025, over 3939.00 frames. ], tot_loss[loss=0.2004, simple_loss=0.2625, pruned_loss=0.06919, over 952616.79 frames. ], batch size: 17, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:25:38,296 INFO [finetune.py:976] (6/7) Epoch 8, batch 2900, loss[loss=0.235, simple_loss=0.3091, pruned_loss=0.08046, over 4813.00 frames. ], tot_loss[loss=0.2041, simple_loss=0.2663, pruned_loss=0.07094, over 950875.70 frames. ], batch size: 45, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:25:41,565 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 09:25:45,630 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43005.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:26:03,398 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.926e+01 1.731e+02 1.970e+02 2.373e+02 5.777e+02, threshold=3.941e+02, percent-clipped=2.0 2023-03-26 09:26:13,022 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43035.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:26:18,883 INFO [finetune.py:976] (6/7) Epoch 8, batch 2950, loss[loss=0.2298, simple_loss=0.2839, pruned_loss=0.0879, over 4839.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.2689, pruned_loss=0.07171, over 951731.09 frames. ], batch size: 49, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:26:44,528 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43083.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:26:45,462 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 09:26:52,570 INFO [finetune.py:976] (6/7) Epoch 8, batch 3000, loss[loss=0.2258, simple_loss=0.2717, pruned_loss=0.08995, over 3920.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2686, pruned_loss=0.07122, over 949229.70 frames. 
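In every optim.py:369 line the threshold equals Clipping_scale times the logged median: e.g. 2.0 * 1.941e+02 = 3.882e+02 just above. The five numbers read naturally as min, 25%, 50%, 75% and max of recent per-batch gradient norms. A sketch under that reading; the window size and reporting period are assumptions.

    import torch

    def clipping_report(recent_grad_norms, clipping_scale=2.0):
        norms = torch.tensor(recent_grad_norms)
        q = torch.quantile(norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = (clipping_scale * q[2]).item()  # 2 x median, matching the logged values
        pct = (norms > threshold).float().mean().item() * 100.0
        print("grad-norm quartiles "
              + " ".join(f"{v:.3e}" for v in q.tolist())
              + f", threshold={threshold:.3e}, percent-clipped={pct:.1f}")
        return threshold

    clipping_report([113.7, 163.2, 194.1, 237.9, 396.0])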
], batch size: 17, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:26:52,570 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 09:26:59,071 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4503, 1.2650, 1.3587, 1.3871, 1.6269, 1.5420, 1.4020, 1.2725], device='cuda:6'), covar=tensor([0.0343, 0.0332, 0.0548, 0.0281, 0.0284, 0.0458, 0.0342, 0.0394], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0112, 0.0141, 0.0117, 0.0105, 0.0102, 0.0092, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.1410e-05, 8.8069e-05, 1.1304e-04, 9.1744e-05, 8.2637e-05, 7.6043e-05, 6.9346e-05, 8.5777e-05], device='cuda:6') 2023-03-26 09:27:00,809 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7841, 1.6237, 1.6081, 1.6398, 1.1139, 3.0545, 1.1392, 1.7748], device='cuda:6'), covar=tensor([0.3185, 0.2214, 0.1927, 0.2214, 0.1817, 0.0263, 0.2500, 0.1211], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0114, 0.0118, 0.0122, 0.0116, 0.0098, 0.0100, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:27:10,878 INFO [finetune.py:1010] (6/7) Epoch 8, validation: loss=0.16, simple_loss=0.2311, pruned_loss=0.04446, over 2265189.00 frames. 2023-03-26 09:27:10,879 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 09:27:49,850 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.686e+02 2.049e+02 2.426e+02 3.920e+02, threshold=4.099e+02, percent-clipped=0.0 2023-03-26 09:28:00,453 INFO [finetune.py:976] (6/7) Epoch 8, batch 3050, loss[loss=0.2238, simple_loss=0.2804, pruned_loss=0.08357, over 4690.00 frames. ], tot_loss[loss=0.2058, simple_loss=0.2691, pruned_loss=0.07122, over 950906.15 frames. ], batch size: 23, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:28:13,461 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43162.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:28:18,979 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5441, 1.4361, 1.4538, 1.5639, 1.1340, 3.1859, 1.1985, 1.7236], device='cuda:6'), covar=tensor([0.3214, 0.2326, 0.2050, 0.2248, 0.1856, 0.0202, 0.2729, 0.1288], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0116, 0.0098, 0.0100, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:28:36,037 INFO [finetune.py:976] (6/7) Epoch 8, batch 3100, loss[loss=0.1799, simple_loss=0.2379, pruned_loss=0.06097, over 4735.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2679, pruned_loss=0.07111, over 951359.09 frames. ], batch size: 23, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:28:52,833 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43210.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:29:01,133 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43215.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:29:14,670 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.165e+02 1.652e+02 1.930e+02 2.337e+02 4.149e+02, threshold=3.860e+02, percent-clipped=1.0 2023-03-26 09:29:23,820 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43235.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:29:34,858 INFO [finetune.py:976] (6/7) Epoch 8, batch 3150, loss[loss=0.2051, simple_loss=0.2698, pruned_loss=0.0702, over 4868.00 frames. 
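The "Computing validation loss" block above switches the model to eval mode, runs the full dev set, and reports the same frame-weighted averages (here over 2265189 frames) before training resumes. A minimal sketch; the per-batch interface is assumed, not the real finetune.py:1001 code, and device placement is elided.

    import torch

    @torch.no_grad()
    def compute_validation_loss(model, valid_loader):
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        for batch in valid_loader:
            # Assumed interface: the model returns (loss, num_frames) per batch.
            loss, num_frames = model(batch)
            tot_loss += loss.item() * num_frames
            tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames, tot_frames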
], tot_loss[loss=0.2024, simple_loss=0.2649, pruned_loss=0.06992, over 950112.92 frames. ], batch size: 34, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:30:24,888 INFO [finetune.py:976] (6/7) Epoch 8, batch 3200, loss[loss=0.1839, simple_loss=0.2543, pruned_loss=0.05677, over 4789.00 frames. ], tot_loss[loss=0.1991, simple_loss=0.2615, pruned_loss=0.06838, over 949563.96 frames. ], batch size: 29, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:30:33,144 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43305.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:30:48,356 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6444, 3.7922, 3.5440, 1.6438, 3.7465, 2.7963, 0.9634, 2.6732], device='cuda:6'), covar=tensor([0.2544, 0.1738, 0.1469, 0.3371, 0.0983, 0.1001, 0.4157, 0.1381], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0171, 0.0159, 0.0129, 0.0155, 0.0122, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 09:30:49,477 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.668e+02 2.087e+02 2.541e+02 1.424e+03, threshold=4.174e+02, percent-clipped=3.0 2023-03-26 09:31:03,656 INFO [finetune.py:976] (6/7) Epoch 8, batch 3250, loss[loss=0.2314, simple_loss=0.289, pruned_loss=0.08691, over 4745.00 frames. ], tot_loss[loss=0.2027, simple_loss=0.2641, pruned_loss=0.07068, over 948543.56 frames. ], batch size: 54, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:31:15,389 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43353.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:31:58,487 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 09:31:59,780 INFO [finetune.py:976] (6/7) Epoch 8, batch 3300, loss[loss=0.1636, simple_loss=0.2391, pruned_loss=0.04407, over 4891.00 frames. ], tot_loss[loss=0.2032, simple_loss=0.2659, pruned_loss=0.07032, over 948053.39 frames. ], batch size: 32, lr: 3.83e-03, grad_scale: 16.0 2023-03-26 09:32:33,735 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-26 09:32:45,625 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.691e+02 2.029e+02 2.462e+02 4.055e+02, threshold=4.059e+02, percent-clipped=0.0 2023-03-26 09:33:04,670 INFO [finetune.py:976] (6/7) Epoch 8, batch 3350, loss[loss=0.2409, simple_loss=0.3102, pruned_loss=0.08574, over 4810.00 frames. ], tot_loss[loss=0.2053, simple_loss=0.2685, pruned_loss=0.07112, over 949741.04 frames. ], batch size: 45, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:33:05,573 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.87 vs. limit=5.0 2023-03-26 09:33:51,121 INFO [finetune.py:976] (6/7) Epoch 8, batch 3400, loss[loss=0.219, simple_loss=0.2782, pruned_loss=0.07992, over 4136.00 frames. ], tot_loss[loss=0.2056, simple_loss=0.269, pruned_loss=0.07112, over 951704.75 frames. 
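The scaling.py:679 "Whitening" lines compare a statistic of the activations against a limit (2.0 for the 8-group hooks, 5.0 for the 1x384 ones); the constraint only bites once the metric exceeds the limit, which is why most lines report "metric=... vs. limit=...". One plausible definition of such a metric is the eigenvalue spread of the per-group channel covariance, which is exactly 1.0 for perfectly white features. This is a guess at the statistic, not the verified icefall formula.

    import torch

    def whitening_metric(x, num_groups):
        n, c = x.shape
        g = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)  # (groups, n, c/g)
        g = g - g.mean(dim=1, keepdim=True)
        cov = g.transpose(1, 2) @ g / n         # per-group channel covariance
        eigs = torch.linalg.eigvalsh(cov)       # (groups, c/g), all >= 0
        return ((eigs ** 2).mean() / eigs.mean() ** 2).item()

    x = torch.randn(2000, 96)                   # white input -> metric near 1.0
    print(f"metric={whitening_metric(x, num_groups=8):.2f} vs. limit=2.0")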
], batch size: 18, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:33:57,894 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8167, 1.1403, 0.9897, 1.5986, 2.0442, 1.5073, 1.4812, 1.6822], device='cuda:6'), covar=tensor([0.1370, 0.2152, 0.1939, 0.1180, 0.1982, 0.2023, 0.1454, 0.1826], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0113, 0.0092, 0.0123, 0.0095, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 09:34:04,513 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43514.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:05,101 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43515.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:15,392 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.653e+02 1.866e+02 2.233e+02 4.638e+02, threshold=3.733e+02, percent-clipped=1.0 2023-03-26 09:34:19,628 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=43535.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:24,526 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8232, 1.5998, 2.1898, 1.4123, 1.9798, 2.1073, 1.6069, 2.2441], device='cuda:6'), covar=tensor([0.1332, 0.2000, 0.1368, 0.2133, 0.0877, 0.1439, 0.2712, 0.0857], device='cuda:6'), in_proj_covar=tensor([0.0200, 0.0203, 0.0196, 0.0195, 0.0180, 0.0219, 0.0218, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:34:25,008 INFO [finetune.py:976] (6/7) Epoch 8, batch 3450, loss[loss=0.2257, simple_loss=0.2745, pruned_loss=0.08846, over 4823.00 frames. ], tot_loss[loss=0.205, simple_loss=0.2687, pruned_loss=0.07067, over 951814.79 frames. ], batch size: 38, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:34:43,317 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43563.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:34:56,388 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43575.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:35:02,198 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=43583.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:35:06,534 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0167, 4.8661, 4.7167, 2.8173, 4.9355, 3.8348, 0.9855, 3.6692], device='cuda:6'), covar=tensor([0.2363, 0.1451, 0.1329, 0.2938, 0.0728, 0.0886, 0.4893, 0.1291], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0173, 0.0161, 0.0130, 0.0156, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 09:35:08,906 INFO [finetune.py:976] (6/7) Epoch 8, batch 3500, loss[loss=0.2521, simple_loss=0.3024, pruned_loss=0.1009, over 4801.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2659, pruned_loss=0.06952, over 951302.44 frames. 
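The zipformer.py:2441 dumps are per-head entropies of the attention weights, a diagnostic for heads collapsing to near-deterministic attention (entropy near 0) or staying diffuse. Computing the statistic is straightforward; the averaging axes below are assumed.

    import torch

    def attn_weights_entropy(attn_weights, eps=1e-20):
        """attn_weights: (num_heads, tgt_len, src_len), each row a softmax
        distribution. Returns one mean entropy per head, as in the dumps."""
        ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return ent.mean(dim=-1)

    attn = torch.softmax(torch.randn(8, 12, 12), dim=-1)
    print(attn_weights_entropy(attn))  # tensor of 8 values, one per head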
], batch size: 39, lr: 3.83e-03, grad_scale: 32.0 2023-03-26 09:35:34,551 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.655e+02 2.028e+02 2.395e+02 4.370e+02, threshold=4.057e+02, percent-clipped=4.0 2023-03-26 09:35:35,152 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1757, 1.2818, 1.2512, 1.3945, 1.3386, 2.5056, 1.1990, 1.4541], device='cuda:6'), covar=tensor([0.0938, 0.1825, 0.1002, 0.0930, 0.1667, 0.0349, 0.1543, 0.1685], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0076, 0.0079, 0.0093, 0.0083, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:35:44,699 INFO [finetune.py:976] (6/7) Epoch 8, batch 3550, loss[loss=0.154, simple_loss=0.2234, pruned_loss=0.04229, over 4731.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.2626, pruned_loss=0.0684, over 953769.80 frames. ], batch size: 23, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:36:03,535 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.6560, 3.1807, 2.8826, 1.5898, 3.0563, 2.5847, 2.5026, 2.8198], device='cuda:6'), covar=tensor([0.0739, 0.0783, 0.1442, 0.2169, 0.1423, 0.1825, 0.1735, 0.0922], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0201, 0.0202, 0.0188, 0.0218, 0.0208, 0.0224, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:36:16,973 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4721, 1.5059, 1.5872, 0.9277, 1.5127, 1.8221, 1.7828, 1.3323], device='cuda:6'), covar=tensor([0.1024, 0.0695, 0.0463, 0.0599, 0.0511, 0.0512, 0.0366, 0.0743], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0157, 0.0120, 0.0138, 0.0134, 0.0125, 0.0146, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.6417e-05, 1.1548e-04, 8.6949e-05, 9.9970e-05, 9.5848e-05, 9.1929e-05, 1.0733e-04, 1.0946e-04], device='cuda:6') 2023-03-26 09:36:18,003 INFO [finetune.py:976] (6/7) Epoch 8, batch 3600, loss[loss=0.2187, simple_loss=0.272, pruned_loss=0.08276, over 4804.00 frames. ], tot_loss[loss=0.1981, simple_loss=0.2606, pruned_loss=0.06784, over 954369.20 frames. ], batch size: 51, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:36:40,279 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.156e+02 1.641e+02 1.836e+02 2.142e+02 3.900e+02, threshold=3.673e+02, percent-clipped=0.0 2023-03-26 09:37:03,251 INFO [finetune.py:976] (6/7) Epoch 8, batch 3650, loss[loss=0.2069, simple_loss=0.2769, pruned_loss=0.06843, over 4921.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2634, pruned_loss=0.06928, over 955604.28 frames. ], batch size: 42, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:37:42,271 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43787.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:37:53,053 INFO [finetune.py:976] (6/7) Epoch 8, batch 3700, loss[loss=0.2307, simple_loss=0.2926, pruned_loss=0.08443, over 4882.00 frames. ], tot_loss[loss=0.2047, simple_loss=0.2675, pruned_loss=0.07093, over 954407.50 frames. 
], batch size: 32, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:37:53,667 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3649, 1.2946, 1.3131, 1.5836, 1.3757, 2.8547, 1.1964, 1.4024], device='cuda:6'), covar=tensor([0.1029, 0.1894, 0.1361, 0.0995, 0.1705, 0.0293, 0.1596, 0.1806], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0079, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:38:37,142 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.066e+01 1.637e+02 2.055e+02 2.501e+02 4.825e+02, threshold=4.110e+02, percent-clipped=4.0 2023-03-26 09:38:56,861 INFO [finetune.py:976] (6/7) Epoch 8, batch 3750, loss[loss=0.1884, simple_loss=0.2637, pruned_loss=0.05658, over 4877.00 frames. ], tot_loss[loss=0.2068, simple_loss=0.2699, pruned_loss=0.07184, over 953285.03 frames. ], batch size: 35, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:39:00,430 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:39:13,808 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=43870.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:39:30,696 INFO [finetune.py:976] (6/7) Epoch 8, batch 3800, loss[loss=0.1783, simple_loss=0.2455, pruned_loss=0.05554, over 4772.00 frames. ], tot_loss[loss=0.2056, simple_loss=0.2697, pruned_loss=0.07075, over 953519.73 frames. ], batch size: 26, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:39:33,167 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5932, 1.4820, 1.4867, 1.5621, 1.1772, 3.1519, 1.2742, 1.8783], device='cuda:6'), covar=tensor([0.3351, 0.2355, 0.2023, 0.2355, 0.1783, 0.0225, 0.2818, 0.1220], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0116, 0.0120, 0.0123, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:39:37,886 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3084, 2.0598, 2.1941, 1.0142, 2.4316, 2.5708, 2.2170, 1.9648], device='cuda:6'), covar=tensor([0.0875, 0.0658, 0.0517, 0.0710, 0.0455, 0.0471, 0.0475, 0.0722], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0157, 0.0121, 0.0137, 0.0133, 0.0126, 0.0146, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.6037e-05, 1.1522e-04, 8.7164e-05, 9.9684e-05, 9.5253e-05, 9.2145e-05, 1.0743e-04, 1.0913e-04], device='cuda:6') 2023-03-26 09:39:40,878 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43909.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:40:01,524 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.597e+02 1.980e+02 2.453e+02 5.062e+02, threshold=3.959e+02, percent-clipped=3.0 2023-03-26 09:40:16,149 INFO [finetune.py:976] (6/7) Epoch 8, batch 3850, loss[loss=0.1837, simple_loss=0.2529, pruned_loss=0.0573, over 4796.00 frames. ], tot_loss[loss=0.2038, simple_loss=0.2681, pruned_loss=0.06979, over 954041.20 frames. 
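The lr field decays very slowly: 3.83e-03 gives way to 3.82e-03 around batch 3550 above, and 3.81e-03 appears near batch 4950 below. This is consistent with icefall's Eden schedule; the formula below is quoted from memory and should be checked against the optim.py in use, but with base_lr=0.004, lr_batches=1e5 and lr_epochs=100 it reproduces the logged values at these batch counts.

    def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
        """Assumed Eden schedule: quartic-root decay in both batch and epoch."""
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

    print(f"lr: {eden_lr(0.004, batch=44000, epoch=8):.2e}")  # -> lr: 3.82e-03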
], batch size: 29, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:40:25,550 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7332, 1.5862, 1.5872, 1.6780, 1.2903, 3.7083, 1.4277, 2.0498], device='cuda:6'), covar=tensor([0.3412, 0.2406, 0.2095, 0.2257, 0.1698, 0.0168, 0.2669, 0.1259], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:40:26,740 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6286, 1.4790, 1.4912, 1.5071, 0.9294, 2.9817, 1.0852, 1.5676], device='cuda:6'), covar=tensor([0.3411, 0.2371, 0.2148, 0.2430, 0.2088, 0.0243, 0.2818, 0.1400], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0117, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:40:26,757 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=43959.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:40:33,723 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=43970.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:40:54,207 INFO [finetune.py:976] (6/7) Epoch 8, batch 3900, loss[loss=0.2469, simple_loss=0.2977, pruned_loss=0.09807, over 4818.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2656, pruned_loss=0.0692, over 955319.64 frames. ], batch size: 39, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:41:17,081 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44020.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:41:22,370 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.530e+02 1.807e+02 2.225e+02 3.627e+02, threshold=3.614e+02, percent-clipped=0.0 2023-03-26 09:41:32,526 INFO [finetune.py:976] (6/7) Epoch 8, batch 3950, loss[loss=0.1946, simple_loss=0.2583, pruned_loss=0.06542, over 4820.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2619, pruned_loss=0.06773, over 957328.13 frames. ], batch size: 41, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:41:37,855 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1092, 2.5950, 2.4707, 1.3344, 2.6048, 2.1417, 1.9796, 2.2312], device='cuda:6'), covar=tensor([0.1130, 0.0953, 0.1741, 0.2420, 0.2077, 0.2305, 0.2326, 0.1442], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0200, 0.0201, 0.0187, 0.0217, 0.0206, 0.0222, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:41:59,445 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5336, 2.2156, 1.7877, 0.8750, 1.9497, 1.9525, 1.7079, 1.9435], device='cuda:6'), covar=tensor([0.0711, 0.0820, 0.1307, 0.1968, 0.1391, 0.1900, 0.2087, 0.0996], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0199, 0.0200, 0.0187, 0.0216, 0.0206, 0.0221, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:42:06,489 INFO [finetune.py:976] (6/7) Epoch 8, batch 4000, loss[loss=0.1781, simple_loss=0.255, pruned_loss=0.05058, over 4834.00 frames. ], tot_loss[loss=0.2005, simple_loss=0.2632, pruned_loss=0.06889, over 955070.49 frames. 
], batch size: 33, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:42:21,687 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4497, 1.4202, 1.5996, 1.6864, 1.5290, 3.1795, 1.2793, 1.5715], device='cuda:6'), covar=tensor([0.0974, 0.1838, 0.1205, 0.1010, 0.1695, 0.0277, 0.1567, 0.1751], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0079, 0.0093, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:42:37,513 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.793e+02 2.145e+02 2.588e+02 4.712e+02, threshold=4.291e+02, percent-clipped=10.0 2023-03-26 09:42:56,496 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44143.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:42:57,049 INFO [finetune.py:976] (6/7) Epoch 8, batch 4050, loss[loss=0.2458, simple_loss=0.3144, pruned_loss=0.0886, over 4816.00 frames. ], tot_loss[loss=0.2061, simple_loss=0.269, pruned_loss=0.07165, over 953474.03 frames. ], batch size: 38, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:43:31,065 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44170.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:43:49,918 INFO [finetune.py:976] (6/7) Epoch 8, batch 4100, loss[loss=0.196, simple_loss=0.2649, pruned_loss=0.06351, over 4900.00 frames. ], tot_loss[loss=0.2067, simple_loss=0.2704, pruned_loss=0.07153, over 953216.69 frames. ], batch size: 37, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:44:02,161 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5315, 1.4762, 1.4234, 1.4920, 1.1856, 2.9405, 1.0878, 1.5546], device='cuda:6'), covar=tensor([0.3186, 0.2383, 0.2085, 0.2308, 0.1656, 0.0257, 0.2684, 0.1327], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0122, 0.0116, 0.0097, 0.0100, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:44:06,176 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6411, 1.2040, 0.7299, 1.5468, 1.9676, 1.2241, 1.4086, 1.5821], device='cuda:6'), covar=tensor([0.1587, 0.2259, 0.2195, 0.1276, 0.2192, 0.2261, 0.1576, 0.2081], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 09:44:15,791 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44218.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:44:22,432 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.086e+02 1.696e+02 1.895e+02 2.360e+02 4.949e+02, threshold=3.791e+02, percent-clipped=1.0 2023-03-26 09:44:31,521 INFO [finetune.py:976] (6/7) Epoch 8, batch 4150, loss[loss=0.2074, simple_loss=0.268, pruned_loss=0.07343, over 4812.00 frames. ], tot_loss[loss=0.2084, simple_loss=0.2719, pruned_loss=0.07249, over 952434.22 frames. ], batch size: 33, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:44:46,754 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44265.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:44:59,090 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.79 vs. 
limit=5.0 2023-03-26 09:45:04,740 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-26 09:45:07,273 INFO [finetune.py:976] (6/7) Epoch 8, batch 4200, loss[loss=0.2001, simple_loss=0.2649, pruned_loss=0.06761, over 4855.00 frames. ], tot_loss[loss=0.2078, simple_loss=0.2717, pruned_loss=0.07193, over 953587.67 frames. ], batch size: 49, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:45:30,600 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44315.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:45:40,452 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.242e+02 1.746e+02 1.975e+02 2.337e+02 5.106e+02, threshold=3.951e+02, percent-clipped=1.0 2023-03-26 09:45:54,906 INFO [finetune.py:976] (6/7) Epoch 8, batch 4250, loss[loss=0.2319, simple_loss=0.2898, pruned_loss=0.08699, over 4822.00 frames. ], tot_loss[loss=0.2048, simple_loss=0.2683, pruned_loss=0.07066, over 953648.58 frames. ], batch size: 41, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:46:23,515 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7446, 1.5906, 1.4156, 1.2543, 1.8043, 1.4963, 1.8475, 1.7294], device='cuda:6'), covar=tensor([0.1538, 0.2557, 0.3495, 0.2829, 0.2692, 0.1846, 0.3057, 0.2005], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0188, 0.0233, 0.0254, 0.0235, 0.0194, 0.0211, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:46:27,773 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4803, 2.3209, 2.8395, 1.8990, 2.5881, 2.7607, 2.2502, 3.0270], device='cuda:6'), covar=tensor([0.1521, 0.2081, 0.1498, 0.2302, 0.1009, 0.1450, 0.2488, 0.0834], device='cuda:6'), in_proj_covar=tensor([0.0200, 0.0204, 0.0196, 0.0194, 0.0180, 0.0219, 0.0219, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:46:32,566 INFO [finetune.py:976] (6/7) Epoch 8, batch 4300, loss[loss=0.1947, simple_loss=0.2648, pruned_loss=0.06236, over 4869.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2637, pruned_loss=0.06893, over 953740.23 frames. ], batch size: 31, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:46:56,827 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.550e+02 1.851e+02 2.365e+02 4.860e+02, threshold=3.701e+02, percent-clipped=2.0 2023-03-26 09:47:05,831 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44443.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:47:06,355 INFO [finetune.py:976] (6/7) Epoch 8, batch 4350, loss[loss=0.1852, simple_loss=0.2534, pruned_loss=0.05855, over 4826.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2614, pruned_loss=0.06815, over 953585.50 frames. ], batch size: 33, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:47:37,687 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44491.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:47:39,433 INFO [finetune.py:976] (6/7) Epoch 8, batch 4400, loss[loss=0.2628, simple_loss=0.3139, pruned_loss=0.1058, over 4173.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2641, pruned_loss=0.06989, over 954678.26 frames. 
], batch size: 65, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:48:17,539 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=44527.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:48:18,618 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.261e+02 1.746e+02 1.995e+02 2.515e+02 6.158e+02, threshold=3.991e+02, percent-clipped=2.0 2023-03-26 09:48:28,721 INFO [finetune.py:976] (6/7) Epoch 8, batch 4450, loss[loss=0.2213, simple_loss=0.2866, pruned_loss=0.07798, over 4918.00 frames. ], tot_loss[loss=0.2059, simple_loss=0.2685, pruned_loss=0.07167, over 952585.44 frames. ], batch size: 36, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:48:37,011 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=44550.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:48:51,540 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44565.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:49:19,337 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44588.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:49:28,022 INFO [finetune.py:976] (6/7) Epoch 8, batch 4500, loss[loss=0.2708, simple_loss=0.3075, pruned_loss=0.1171, over 4808.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2684, pruned_loss=0.07124, over 952413.87 frames. ], batch size: 39, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:49:38,832 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.68 vs. limit=5.0 2023-03-26 09:49:47,688 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=44611.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:49:48,883 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44613.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:49:50,668 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=44615.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:49:51,336 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2243, 2.1023, 1.6552, 2.1937, 2.1635, 1.8072, 2.5341, 2.1593], device='cuda:6'), covar=tensor([0.1497, 0.2657, 0.3629, 0.3069, 0.2868, 0.1929, 0.3531, 0.2113], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0188, 0.0232, 0.0253, 0.0235, 0.0194, 0.0210, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:50:00,000 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.305e+02 1.720e+02 2.026e+02 2.465e+02 5.780e+02, threshold=4.053e+02, percent-clipped=2.0 2023-03-26 09:50:10,542 INFO [finetune.py:976] (6/7) Epoch 8, batch 4550, loss[loss=0.205, simple_loss=0.2709, pruned_loss=0.06954, over 4867.00 frames. ], tot_loss[loss=0.2065, simple_loss=0.2697, pruned_loss=0.07161, over 952640.89 frames. ], batch size: 31, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:50:17,246 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1867, 1.2319, 0.6656, 1.9592, 2.4028, 1.7098, 1.7440, 1.9094], device='cuda:6'), covar=tensor([0.1462, 0.2279, 0.2440, 0.1226, 0.1883, 0.2054, 0.1434, 0.2115], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0092, 0.0122, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 09:50:27,423 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. 
limit=2.0 2023-03-26 09:50:27,737 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=44663.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:50:38,170 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0348, 2.0340, 1.8469, 2.2236, 2.5388, 2.1239, 1.8469, 1.6308], device='cuda:6'), covar=tensor([0.2377, 0.2146, 0.1890, 0.1597, 0.2106, 0.1190, 0.2519, 0.1937], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0209, 0.0205, 0.0187, 0.0240, 0.0178, 0.0214, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:50:52,807 INFO [finetune.py:976] (6/7) Epoch 8, batch 4600, loss[loss=0.2695, simple_loss=0.3243, pruned_loss=0.1074, over 4775.00 frames. ], tot_loss[loss=0.2057, simple_loss=0.2691, pruned_loss=0.07112, over 953010.38 frames. ], batch size: 51, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:51:15,459 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.778e+01 1.566e+02 1.839e+02 2.114e+02 3.234e+02, threshold=3.678e+02, percent-clipped=0.0 2023-03-26 09:51:25,982 INFO [finetune.py:976] (6/7) Epoch 8, batch 4650, loss[loss=0.2387, simple_loss=0.2802, pruned_loss=0.09863, over 4587.00 frames. ], tot_loss[loss=0.2033, simple_loss=0.2664, pruned_loss=0.07014, over 953468.78 frames. ], batch size: 20, lr: 3.82e-03, grad_scale: 32.0 2023-03-26 09:51:46,837 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3717, 2.0947, 2.6212, 1.7839, 2.3512, 2.3753, 1.9156, 2.7557], device='cuda:6'), covar=tensor([0.1439, 0.2243, 0.1869, 0.2387, 0.1092, 0.1958, 0.2943, 0.0858], device='cuda:6'), in_proj_covar=tensor([0.0199, 0.0204, 0.0195, 0.0193, 0.0179, 0.0219, 0.0219, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:51:59,451 INFO [finetune.py:976] (6/7) Epoch 8, batch 4700, loss[loss=0.1891, simple_loss=0.2574, pruned_loss=0.06038, over 4865.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.2629, pruned_loss=0.06823, over 955681.01 frames. ], batch size: 31, lr: 3.82e-03, grad_scale: 16.0 2023-03-26 09:52:22,746 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.499e+02 1.877e+02 2.319e+02 4.193e+02, threshold=3.754e+02, percent-clipped=1.0 2023-03-26 09:52:32,228 INFO [finetune.py:976] (6/7) Epoch 8, batch 4750, loss[loss=0.2318, simple_loss=0.2934, pruned_loss=0.08511, over 4826.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2617, pruned_loss=0.06835, over 955804.23 frames. 
], batch size: 39, lr: 3.82e-03, grad_scale: 16.0 2023-03-26 09:52:58,205 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44883.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:53:03,097 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9436, 1.6013, 2.2195, 1.5775, 1.9238, 2.0585, 1.5634, 2.3043], device='cuda:6'), covar=tensor([0.1399, 0.2086, 0.1413, 0.2029, 0.1048, 0.1739, 0.2858, 0.0925], device='cuda:6'), in_proj_covar=tensor([0.0200, 0.0204, 0.0196, 0.0194, 0.0180, 0.0220, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:53:03,676 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5694, 3.0647, 2.5795, 2.0304, 2.8515, 2.9797, 2.8756, 2.5204], device='cuda:6'), covar=tensor([0.0622, 0.0498, 0.0780, 0.0869, 0.0453, 0.0731, 0.0629, 0.0933], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0132, 0.0144, 0.0125, 0.0115, 0.0144, 0.0144, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:53:03,698 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2020, 2.2297, 2.3484, 1.0618, 2.5468, 2.8224, 2.3794, 2.1186], device='cuda:6'), covar=tensor([0.0929, 0.0687, 0.0565, 0.0759, 0.0798, 0.0629, 0.0467, 0.0638], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0158, 0.0123, 0.0138, 0.0134, 0.0126, 0.0148, 0.0150], device='cuda:6'), out_proj_covar=tensor([9.7721e-05, 1.1645e-04, 8.8765e-05, 1.0023e-04, 9.6119e-05, 9.2701e-05, 1.0848e-04, 1.1047e-04], device='cuda:6') 2023-03-26 09:53:05,273 INFO [finetune.py:976] (6/7) Epoch 8, batch 4800, loss[loss=0.2192, simple_loss=0.2839, pruned_loss=0.07723, over 4812.00 frames. ], tot_loss[loss=0.2016, simple_loss=0.2639, pruned_loss=0.06961, over 955625.77 frames. ], batch size: 51, lr: 3.82e-03, grad_scale: 16.0 2023-03-26 09:53:08,338 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5170, 1.3565, 1.4318, 1.4690, 0.9894, 2.9580, 1.0965, 1.6193], device='cuda:6'), covar=tensor([0.3532, 0.2567, 0.2203, 0.2464, 0.2063, 0.0241, 0.2948, 0.1353], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0116, 0.0098, 0.0101, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:53:15,870 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=44906.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:53:36,564 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5122, 1.3568, 1.3834, 1.4960, 0.8974, 2.9295, 1.0787, 1.5904], device='cuda:6'), covar=tensor([0.3609, 0.2669, 0.2292, 0.2440, 0.2216, 0.0266, 0.2870, 0.1441], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0116, 0.0098, 0.0100, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:53:41,288 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.589e+02 1.932e+02 2.383e+02 4.430e+02, threshold=3.864e+02, percent-clipped=2.0 2023-03-26 09:53:52,484 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 09:53:55,412 INFO [finetune.py:976] (6/7) Epoch 8, batch 4850, loss[loss=0.2396, simple_loss=0.3064, pruned_loss=0.0864, over 4911.00 frames. 
], tot_loss[loss=0.2032, simple_loss=0.2663, pruned_loss=0.07007, over 955199.14 frames. ], batch size: 38, lr: 3.82e-03, grad_scale: 16.0 2023-03-26 09:54:32,983 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 09:54:57,805 INFO [finetune.py:976] (6/7) Epoch 8, batch 4900, loss[loss=0.2069, simple_loss=0.2883, pruned_loss=0.06276, over 4803.00 frames. ], tot_loss[loss=0.2039, simple_loss=0.2674, pruned_loss=0.07024, over 954972.25 frames. ], batch size: 40, lr: 3.82e-03, grad_scale: 16.0 2023-03-26 09:55:25,592 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.694e+02 2.008e+02 2.325e+02 4.035e+02, threshold=4.016e+02, percent-clipped=1.0 2023-03-26 09:55:25,709 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1300, 2.0937, 2.1069, 1.5150, 2.2982, 2.2527, 2.2127, 1.7963], device='cuda:6'), covar=tensor([0.0606, 0.0676, 0.0749, 0.0891, 0.0524, 0.0736, 0.0677, 0.1185], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0132, 0.0144, 0.0125, 0.0115, 0.0144, 0.0144, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:55:44,370 INFO [finetune.py:976] (6/7) Epoch 8, batch 4950, loss[loss=0.2592, simple_loss=0.3239, pruned_loss=0.09723, over 4175.00 frames. ], tot_loss[loss=0.2059, simple_loss=0.2696, pruned_loss=0.07112, over 953180.06 frames. ], batch size: 65, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 09:55:57,033 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45056.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:56:09,957 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-26 09:56:13,224 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45081.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:56:21,541 INFO [finetune.py:976] (6/7) Epoch 8, batch 5000, loss[loss=0.2178, simple_loss=0.2717, pruned_loss=0.08194, over 4903.00 frames. ], tot_loss[loss=0.2056, simple_loss=0.2692, pruned_loss=0.07098, over 953449.55 frames. 
], batch size: 36, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 09:56:25,902 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4541, 2.5255, 2.5729, 2.0487, 2.6079, 2.6131, 2.6582, 2.1853], device='cuda:6'), covar=tensor([0.0608, 0.0582, 0.0660, 0.0747, 0.0595, 0.0648, 0.0605, 0.0983], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0132, 0.0145, 0.0125, 0.0116, 0.0144, 0.0144, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:56:37,081 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45117.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 09:56:45,214 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.621e+02 1.919e+02 2.483e+02 3.797e+02, threshold=3.837e+02, percent-clipped=0.0 2023-03-26 09:56:50,822 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6251, 1.4439, 2.0609, 1.9510, 1.7708, 3.9730, 1.4324, 1.9698], device='cuda:6'), covar=tensor([0.0898, 0.1831, 0.1141, 0.0960, 0.1540, 0.0195, 0.1480, 0.1626], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 09:56:52,665 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45142.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:56:53,773 INFO [finetune.py:976] (6/7) Epoch 8, batch 5050, loss[loss=0.1736, simple_loss=0.2394, pruned_loss=0.05394, over 4921.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2665, pruned_loss=0.07029, over 955305.19 frames. ], batch size: 33, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 09:57:19,962 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45183.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:57:21,387 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.26 vs. limit=5.0 2023-03-26 09:57:26,575 INFO [finetune.py:976] (6/7) Epoch 8, batch 5100, loss[loss=0.1401, simple_loss=0.2191, pruned_loss=0.03053, over 4817.00 frames. ], tot_loss[loss=0.2, simple_loss=0.2628, pruned_loss=0.06862, over 956070.96 frames. ], batch size: 25, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 09:57:35,008 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45206.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:57:38,670 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0280, 1.6799, 2.3367, 1.6079, 1.9688, 2.1700, 1.6718, 2.3451], device='cuda:6'), covar=tensor([0.1328, 0.2159, 0.1432, 0.2108, 0.0986, 0.1580, 0.2833, 0.0832], device='cuda:6'), in_proj_covar=tensor([0.0201, 0.0205, 0.0196, 0.0195, 0.0181, 0.0220, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 09:57:55,114 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.630e+02 1.903e+02 2.262e+02 3.588e+02, threshold=3.806e+02, percent-clipped=0.0 2023-03-26 09:57:55,794 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45231.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:58:04,089 INFO [finetune.py:976] (6/7) Epoch 8, batch 5150, loss[loss=0.2285, simple_loss=0.286, pruned_loss=0.08545, over 4837.00 frames. ], tot_loss[loss=0.2002, simple_loss=0.2627, pruned_loss=0.06885, over 954696.63 frames. 
], batch size: 33, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 09:58:05,549 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.63 vs. limit=2.0 2023-03-26 09:58:11,269 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45254.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 09:58:40,359 INFO [finetune.py:976] (6/7) Epoch 8, batch 5200, loss[loss=0.1584, simple_loss=0.2296, pruned_loss=0.04355, over 4767.00 frames. ], tot_loss[loss=0.2029, simple_loss=0.2664, pruned_loss=0.0697, over 955173.93 frames. ], batch size: 26, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 09:59:09,675 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.659e+02 1.911e+02 2.326e+02 4.760e+02, threshold=3.822e+02, percent-clipped=1.0 2023-03-26 09:59:18,774 INFO [finetune.py:976] (6/7) Epoch 8, batch 5250, loss[loss=0.2064, simple_loss=0.2934, pruned_loss=0.05964, over 4896.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2681, pruned_loss=0.07025, over 953231.43 frames. ], batch size: 43, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 09:59:27,672 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45348.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 09:59:30,552 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 09:59:57,318 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4917, 2.1951, 1.7511, 0.7841, 1.8779, 1.9773, 1.7357, 1.9091], device='cuda:6'), covar=tensor([0.0868, 0.0837, 0.1579, 0.2005, 0.1471, 0.2060, 0.2354, 0.0988], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0203, 0.0203, 0.0190, 0.0219, 0.0208, 0.0224, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:00:03,243 INFO [finetune.py:976] (6/7) Epoch 8, batch 5300, loss[loss=0.2759, simple_loss=0.3293, pruned_loss=0.1112, over 4816.00 frames. ], tot_loss[loss=0.2056, simple_loss=0.2699, pruned_loss=0.07063, over 955634.87 frames. ], batch size: 39, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 10:00:10,714 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6153, 3.1707, 3.1360, 1.5617, 3.3403, 2.5028, 0.9318, 2.3600], device='cuda:6'), covar=tensor([0.2135, 0.1615, 0.1710, 0.3563, 0.1132, 0.1069, 0.4282, 0.1580], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0173, 0.0160, 0.0129, 0.0156, 0.0122, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 10:00:16,220 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45409.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 10:00:16,528 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. 
limit=2.0 2023-03-26 10:00:18,474 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45412.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 10:00:30,691 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.820e+01 1.516e+02 1.858e+02 2.399e+02 4.469e+02, threshold=3.716e+02, percent-clipped=2.0 2023-03-26 10:00:39,970 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45437.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:00:48,320 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9904, 1.8168, 1.5555, 1.6729, 1.9573, 1.6421, 2.1769, 1.9090], device='cuda:6'), covar=tensor([0.1418, 0.2454, 0.3461, 0.2834, 0.2630, 0.1787, 0.3327, 0.2175], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0190, 0.0235, 0.0255, 0.0237, 0.0195, 0.0212, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:00:49,418 INFO [finetune.py:976] (6/7) Epoch 8, batch 5350, loss[loss=0.1851, simple_loss=0.2467, pruned_loss=0.06171, over 4918.00 frames. ], tot_loss[loss=0.206, simple_loss=0.2704, pruned_loss=0.07083, over 955795.43 frames. ], batch size: 43, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 10:01:54,087 INFO [finetune.py:976] (6/7) Epoch 8, batch 5400, loss[loss=0.1491, simple_loss=0.2121, pruned_loss=0.04304, over 4764.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.2664, pruned_loss=0.06935, over 955481.93 frames. ], batch size: 28, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 10:02:34,315 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.640e+02 2.019e+02 2.439e+02 4.086e+02, threshold=4.037e+02, percent-clipped=1.0 2023-03-26 10:02:41,720 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3024, 3.7144, 3.9102, 4.1039, 4.0804, 3.8682, 4.3613, 1.3710], device='cuda:6'), covar=tensor([0.0682, 0.0757, 0.0799, 0.0952, 0.1048, 0.1203, 0.0593, 0.5244], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0241, 0.0277, 0.0293, 0.0332, 0.0281, 0.0302, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:02:54,209 INFO [finetune.py:976] (6/7) Epoch 8, batch 5450, loss[loss=0.1693, simple_loss=0.2324, pruned_loss=0.05309, over 4722.00 frames. ], tot_loss[loss=0.2011, simple_loss=0.2644, pruned_loss=0.06893, over 956802.36 frames. ], batch size: 23, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 10:03:22,768 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 10:03:54,238 INFO [finetune.py:976] (6/7) Epoch 8, batch 5500, loss[loss=0.1813, simple_loss=0.2411, pruned_loss=0.06073, over 4751.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2619, pruned_loss=0.06841, over 956087.68 frames. 
], batch size: 59, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 10:04:03,512 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8382, 1.1749, 0.8804, 1.7412, 2.1705, 1.5187, 1.5727, 1.6949], device='cuda:6'), covar=tensor([0.1626, 0.2398, 0.2250, 0.1249, 0.2025, 0.1946, 0.1604, 0.2145], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0097, 0.0115, 0.0092, 0.0124, 0.0096, 0.0101, 0.0093], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 10:04:18,375 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.550e+02 1.829e+02 2.210e+02 3.729e+02, threshold=3.658e+02, percent-clipped=0.0 2023-03-26 10:04:26,108 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9456, 1.8280, 1.5005, 1.6725, 1.9033, 1.5829, 2.2431, 1.9146], device='cuda:6'), covar=tensor([0.1479, 0.2459, 0.3507, 0.2891, 0.2732, 0.1857, 0.3458, 0.1997], device='cuda:6'), in_proj_covar=tensor([0.0174, 0.0190, 0.0236, 0.0257, 0.0238, 0.0196, 0.0213, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:04:28,410 INFO [finetune.py:976] (6/7) Epoch 8, batch 5550, loss[loss=0.1915, simple_loss=0.2677, pruned_loss=0.05764, over 4894.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2639, pruned_loss=0.06887, over 956544.21 frames. ], batch size: 43, lr: 3.81e-03, grad_scale: 16.0 2023-03-26 10:04:32,077 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3742, 1.1927, 1.2045, 1.2002, 1.5925, 1.4568, 1.3540, 1.0739], device='cuda:6'), covar=tensor([0.0313, 0.0277, 0.0537, 0.0273, 0.0187, 0.0414, 0.0279, 0.0378], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0109, 0.0139, 0.0114, 0.0102, 0.0100, 0.0090, 0.0107], device='cuda:6'), out_proj_covar=tensor([6.9379e-05, 8.5142e-05, 1.1081e-04, 8.9645e-05, 7.9761e-05, 7.4282e-05, 6.8431e-05, 8.3186e-05], device='cuda:6') 2023-03-26 10:04:45,729 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2126, 2.8482, 2.9813, 3.1523, 2.9547, 2.8173, 3.2740, 1.0556], device='cuda:6'), covar=tensor([0.1085, 0.0971, 0.1051, 0.1194, 0.1681, 0.1848, 0.1137, 0.5201], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0242, 0.0278, 0.0294, 0.0333, 0.0283, 0.0302, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:05:19,423 INFO [finetune.py:976] (6/7) Epoch 8, batch 5600, loss[loss=0.2073, simple_loss=0.273, pruned_loss=0.07079, over 4904.00 frames. ], tot_loss[loss=0.2046, simple_loss=0.268, pruned_loss=0.07058, over 955943.46 frames. 
2023-03-26 10:05:24,124 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45702.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:05:25,273 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=45704.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 10:05:29,949 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45712.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 10:05:40,352 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.219e+02 1.739e+02 1.993e+02 2.505e+02 5.014e+02, threshold=3.987e+02, percent-clipped=3.0
2023-03-26 10:05:44,491 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=45737.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:05:48,901 INFO [finetune.py:976] (6/7) Epoch 8, batch 5650, loss[loss=0.2166, simple_loss=0.2893, pruned_loss=0.07194, over 4821.00 frames. ], tot_loss[loss=0.2049, simple_loss=0.2697, pruned_loss=0.07007, over 956656.45 frames. ], batch size: 33, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:05:58,590 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45760.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:00,379 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45763.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:06,759 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45774.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:13,161 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=45785.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:06:18,813 INFO [finetune.py:976] (6/7) Epoch 8, batch 5700, loss[loss=0.1654, simple_loss=0.2176, pruned_loss=0.0566, over 4252.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2662, pruned_loss=0.06994, over 939469.00 frames. ], batch size: 18, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:06:33,876 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.20 vs. limit=5.0
2023-03-26 10:06:54,870 INFO [finetune.py:976] (6/7) Epoch 9, batch 0, loss[loss=0.1885, simple_loss=0.2508, pruned_loss=0.06314, over 4728.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2508, pruned_loss=0.06314, over 4728.00 frames. ], batch size: 59, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:06:54,870 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 10:07:00,295 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6877, 1.4899, 1.9851, 2.8918, 1.9784, 2.2982, 0.9654, 2.3063], device='cuda:6'), covar=tensor([0.1855, 0.1556, 0.1227, 0.0702, 0.0948, 0.1236, 0.1878, 0.0772], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0118, 0.0135, 0.0166, 0.0103, 0.0140, 0.0127, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 10:07:11,027 INFO [finetune.py:1010] (6/7) Epoch 9, validation: loss=0.1616, simple_loss=0.233, pruned_loss=0.04515, over 2265189.00 frames.
2023-03-26 10:07:11,028 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB
2023-03-26 10:07:11,212 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0
2023-03-26 10:07:17,602 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.132e+01 1.600e+02 1.914e+02 2.307e+02 4.538e+02, threshold=3.829e+02, percent-clipped=2.0
2023-03-26 10:07:22,766 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45835.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:07:42,402 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2454, 3.7386, 3.9341, 3.9409, 3.7498, 3.5028, 4.3510, 1.4305], device='cuda:6'), covar=tensor([0.1145, 0.1352, 0.1461, 0.1732, 0.1915, 0.2444, 0.1054, 0.7065], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0243, 0.0279, 0.0295, 0.0334, 0.0283, 0.0304, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:07:46,528 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=45859.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:07:55,655 INFO [finetune.py:976] (6/7) Epoch 9, batch 50, loss[loss=0.2092, simple_loss=0.2753, pruned_loss=0.07152, over 4902.00 frames. ], tot_loss[loss=0.2059, simple_loss=0.2715, pruned_loss=0.07012, over 217366.65 frames. ], batch size: 35, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:08:23,309 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0
2023-03-26 10:08:35,699 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5690, 1.1744, 0.7264, 1.4307, 2.0220, 0.7103, 1.3197, 1.3854], device='cuda:6'), covar=tensor([0.1597, 0.2126, 0.1904, 0.1243, 0.2028, 0.2019, 0.1618, 0.2109], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 10:08:35,731 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=45920.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:08:36,815 INFO [finetune.py:976] (6/7) Epoch 9, batch 100, loss[loss=0.1778, simple_loss=0.2336, pruned_loss=0.06102, over 4763.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.265, pruned_loss=0.06871, over 383273.69 frames. ], batch size: 26, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:08:42,570 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.760e+02 2.008e+02 2.423e+02 3.807e+02, threshold=4.016e+02, percent-clipped=0.0
2023-03-26 10:09:10,374 INFO [finetune.py:976] (6/7) Epoch 9, batch 150, loss[loss=0.1648, simple_loss=0.2235, pruned_loss=0.05302, over 4802.00 frames. ], tot_loss[loss=0.196, simple_loss=0.2582, pruned_loss=0.06691, over 510232.92 frames. ], batch size: 51, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:09:32,059 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46004.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 10:09:49,163 INFO [finetune.py:976] (6/7) Epoch 9, batch 200, loss[loss=0.1961, simple_loss=0.2546, pruned_loss=0.06878, over 4916.00 frames. ], tot_loss[loss=0.1998, simple_loss=0.2602, pruned_loss=0.06972, over 607091.77 frames. ], batch size: 36, lr: 3.81e-03, grad_scale: 16.0
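The batch-0 block above shows the per-epoch validation pass: "Computing validation loss" switches the model to eval mode, the whole dev set is scored, and the result is reported as a frame-weighted average ("validation: loss=0.1616 ... over 2265189.00 frames"). A sketch of that loop, where loss_fn is a stand-in for the recipe's loss computation returning a loss sum and a frame count per batch:

    import torch

    def compute_validation_loss(model, valid_loader, loss_fn):
        # Frame-weighted average over the dev set, as in the
        # 'validation: loss=... over N frames' log line.
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_loader:
                loss_sum, num_frames = loss_fn(model, batch)
                tot_loss += float(loss_sum)
                tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames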
2023-03-26 10:09:58,659 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.086e+02 1.678e+02 2.056e+02 2.461e+02 4.455e+02, threshold=4.113e+02, percent-clipped=4.0
2023-03-26 10:10:22,925 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46052.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 10:10:26,567 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46058.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:10:36,487 INFO [finetune.py:976] (6/7) Epoch 9, batch 250, loss[loss=0.1849, simple_loss=0.2475, pruned_loss=0.06119, over 4805.00 frames. ], tot_loss[loss=0.2058, simple_loss=0.2673, pruned_loss=0.07218, over 685080.99 frames. ], batch size: 25, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:11:09,154 INFO [finetune.py:976] (6/7) Epoch 9, batch 300, loss[loss=0.169, simple_loss=0.2314, pruned_loss=0.05335, over 4071.00 frames. ], tot_loss[loss=0.2063, simple_loss=0.2692, pruned_loss=0.07168, over 744450.72 frames. ], batch size: 17, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:11:14,967 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.304e+02 1.737e+02 2.021e+02 2.354e+02 3.684e+02, threshold=4.042e+02, percent-clipped=0.0
2023-03-26 10:11:15,051 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46130.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:11:41,966 INFO [finetune.py:976] (6/7) Epoch 9, batch 350, loss[loss=0.2238, simple_loss=0.2805, pruned_loss=0.0835, over 4919.00 frames. ], tot_loss[loss=0.2081, simple_loss=0.271, pruned_loss=0.07261, over 792325.35 frames. ], batch size: 33, lr: 3.81e-03, grad_scale: 16.0
2023-03-26 10:12:11,371 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46215.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:12:17,562 INFO [finetune.py:976] (6/7) Epoch 9, batch 400, loss[loss=0.2193, simple_loss=0.2806, pruned_loss=0.07905, over 4290.00 frames. ], tot_loss[loss=0.2077, simple_loss=0.2707, pruned_loss=0.0723, over 826966.17 frames. ], batch size: 66, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:12:23,439 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.951e+01 1.676e+02 2.053e+02 2.418e+02 4.627e+02, threshold=4.106e+02, percent-clipped=2.0
2023-03-26 10:12:26,289 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.8646, 4.2055, 4.4149, 4.6578, 4.5775, 4.3596, 4.9466, 1.5498], device='cuda:6'), covar=tensor([0.0626, 0.0769, 0.0636, 0.0776, 0.1149, 0.1306, 0.0492, 0.5382], device='cuda:6'), in_proj_covar=tensor([0.0344, 0.0241, 0.0275, 0.0291, 0.0330, 0.0279, 0.0299, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:13:00,669 INFO [finetune.py:976] (6/7) Epoch 9, batch 450, loss[loss=0.2031, simple_loss=0.2706, pruned_loss=0.06783, over 4815.00 frames. ], tot_loss[loss=0.2043, simple_loss=0.2682, pruned_loss=0.07018, over 855270.73 frames. ], batch size: 39, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:13:03,240 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46276.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:13:27,998 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1548, 2.4295, 2.1653, 1.7140, 2.4235, 2.7190, 2.5835, 2.0326], device='cuda:6'), covar=tensor([0.0658, 0.0619, 0.0913, 0.0933, 0.0858, 0.0670, 0.0606, 0.1080], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0133, 0.0144, 0.0123, 0.0115, 0.0143, 0.0144, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:13:29,530 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0
2023-03-26 10:13:35,969 INFO [finetune.py:976] (6/7) Epoch 9, batch 500, loss[loss=0.1533, simple_loss=0.2306, pruned_loss=0.03803, over 4785.00 frames. ], tot_loss[loss=0.2016, simple_loss=0.2652, pruned_loss=0.069, over 877117.96 frames. ], batch size: 29, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:13:45,337 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.586e+02 1.900e+02 2.408e+02 3.619e+02, threshold=3.799e+02, percent-clipped=0.0
2023-03-26 10:13:54,717 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46337.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:13:55,977 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 10:14:08,394 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46358.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:14:09,077 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0
2023-03-26 10:14:15,591 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.4302, 4.6797, 4.9392, 5.2494, 5.0973, 4.7301, 5.5035, 1.6613], device='cuda:6'), covar=tensor([0.0673, 0.0789, 0.0749, 0.0833, 0.1221, 0.1574, 0.0516, 0.5775], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0243, 0.0276, 0.0292, 0.0332, 0.0281, 0.0301, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:14:17,263 INFO [finetune.py:976] (6/7) Epoch 9, batch 550, loss[loss=0.2344, simple_loss=0.2822, pruned_loss=0.09327, over 4910.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2613, pruned_loss=0.06717, over 895292.27 frames. ], batch size: 32, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:14:39,978 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46406.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:14:43,743 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0757, 1.0031, 1.0190, 0.4354, 0.7882, 1.1402, 1.2264, 0.9904], device='cuda:6'), covar=tensor([0.0870, 0.0536, 0.0502, 0.0533, 0.0494, 0.0555, 0.0366, 0.0588], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0156, 0.0121, 0.0136, 0.0132, 0.0125, 0.0146, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.6270e-05, 1.1497e-04, 8.7723e-05, 9.8639e-05, 9.4303e-05, 9.1793e-05, 1.0692e-04, 1.0814e-04], device='cuda:6')
2023-03-26 10:14:50,107 INFO [finetune.py:976] (6/7) Epoch 9, batch 600, loss[loss=0.2064, simple_loss=0.2703, pruned_loss=0.07128, over 4894.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2626, pruned_loss=0.0683, over 908025.55 frames. ], batch size: 43, lr: 3.80e-03, grad_scale: 16.0
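The recurring [zipformer.py:1188] entries are the per-stack stochastic layer-drop diagnostic: each encoder stack has its own warmup window in batches (warmup_begin/warmup_end, staggered from 666.7 to 4000.0), and on each step some of its layers may be skipped (num_to_drop, layers_to_drop). A rough sketch of such a schedule; the actual drop probabilities are not shown in the log, so the rates below are purely illustrative:

    import random

    def layers_to_drop(batch_count, warmup_begin, warmup_end,
                       num_layers, initial_rate=0.5, final_rate=0.05):
        # Drop probability ramps from initial_rate down to final_rate as
        # batch_count moves through [warmup_begin, warmup_end]; a small
        # residual rate keeps occasional drops long after warmup, matching
        # e.g. num_to_drop=1 at batch_count=46052.0 with warmup_end=1333.3.
        if batch_count < warmup_begin:
            rate = initial_rate
        elif batch_count > warmup_end:
            rate = final_rate
        else:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            rate = initial_rate + frac * (final_rate - initial_rate)
        return {i for i in range(num_layers) if random.random() < rate}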
2023-03-26 10:14:54,847 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.702e+02 1.969e+02 2.390e+02 4.680e+02, threshold=3.938e+02, percent-clipped=3.0
2023-03-26 10:14:54,941 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46430.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:14:54,995 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2583, 2.1149, 1.7124, 2.0947, 2.1180, 1.7540, 2.5046, 2.2134], device='cuda:6'), covar=tensor([0.1368, 0.2423, 0.3487, 0.2952, 0.2713, 0.1873, 0.3176, 0.1901], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0187, 0.0231, 0.0252, 0.0235, 0.0193, 0.0210, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:14:58,577 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0
2023-03-26 10:15:08,092 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1515, 1.8572, 2.0323, 1.9938, 1.7726, 1.8371, 1.9949, 1.9761], device='cuda:6'), covar=tensor([0.4351, 0.5025, 0.3994, 0.5208, 0.6202, 0.4516, 0.6231, 0.3916], device='cuda:6'), in_proj_covar=tensor([0.0232, 0.0242, 0.0254, 0.0256, 0.0248, 0.0225, 0.0274, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:15:36,213 INFO [finetune.py:976] (6/7) Epoch 9, batch 650, loss[loss=0.178, simple_loss=0.2628, pruned_loss=0.0466, over 4900.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2656, pruned_loss=0.06906, over 915739.21 frames. ], batch size: 43, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:15:40,463 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46478.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:16:05,380 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46515.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:16:09,549 INFO [finetune.py:976] (6/7) Epoch 9, batch 700, loss[loss=0.1961, simple_loss=0.2731, pruned_loss=0.0596, over 4823.00 frames. ], tot_loss[loss=0.2051, simple_loss=0.2687, pruned_loss=0.07076, over 926252.48 frames. ], batch size: 49, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:16:14,888 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.666e+02 2.010e+02 2.529e+02 4.289e+02, threshold=4.019e+02, percent-clipped=2.0
2023-03-26 10:16:37,475 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46563.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:16:42,852 INFO [finetune.py:976] (6/7) Epoch 9, batch 750, loss[loss=0.1896, simple_loss=0.268, pruned_loss=0.05564, over 4924.00 frames. ], tot_loss[loss=0.2044, simple_loss=0.2683, pruned_loss=0.0702, over 931882.59 frames. ], batch size: 41, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:16:58,833 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46595.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:03,704 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46602.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:06,358 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-26 10:17:08,381 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7077, 1.5500, 1.4726, 1.4403, 1.9379, 1.8825, 1.6755, 1.3050], device='cuda:6'), covar=tensor([0.0290, 0.0359, 0.0562, 0.0367, 0.0210, 0.0464, 0.0320, 0.0482], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0110, 0.0139, 0.0115, 0.0102, 0.0101, 0.0091, 0.0108], device='cuda:6'), out_proj_covar=tensor([6.9690e-05, 8.5954e-05, 1.1093e-04, 9.0526e-05, 8.0286e-05, 7.4850e-05, 6.8688e-05, 8.3689e-05], device='cuda:6')
2023-03-26 10:17:15,988 INFO [finetune.py:976] (6/7) Epoch 9, batch 800, loss[loss=0.2409, simple_loss=0.2979, pruned_loss=0.09199, over 4744.00 frames. ], tot_loss[loss=0.2038, simple_loss=0.2676, pruned_loss=0.06996, over 936148.32 frames. ], batch size: 27, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:17:20,816 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.216e+01 1.561e+02 1.863e+02 2.153e+02 3.377e+02, threshold=3.726e+02, percent-clipped=0.0
2023-03-26 10:17:22,505 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46632.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:39,593 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46656.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:17:43,763 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46663.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:17:45,001 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9854, 1.8687, 1.4255, 1.7667, 1.8971, 1.6600, 2.5160, 1.9693], device='cuda:6'), covar=tensor([0.1397, 0.2135, 0.3491, 0.3015, 0.2773, 0.1675, 0.2872, 0.1955], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0189, 0.0234, 0.0255, 0.0238, 0.0195, 0.0212, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:17:49,095 INFO [finetune.py:976] (6/7) Epoch 9, batch 850, loss[loss=0.2553, simple_loss=0.3049, pruned_loss=0.1028, over 4827.00 frames. ], tot_loss[loss=0.2042, simple_loss=0.2672, pruned_loss=0.07063, over 941326.85 frames. ], batch size: 38, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:18:16,079 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5831, 2.2597, 1.8868, 0.9009, 2.0911, 2.0122, 1.7953, 2.0743], device='cuda:6'), covar=tensor([0.0739, 0.0916, 0.1457, 0.2087, 0.1287, 0.1983, 0.1988, 0.0946], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0201, 0.0201, 0.0187, 0.0217, 0.0206, 0.0223, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:18:34,826 INFO [finetune.py:976] (6/7) Epoch 9, batch 900, loss[loss=0.2142, simple_loss=0.2674, pruned_loss=0.08046, over 4733.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2638, pruned_loss=0.06962, over 944399.91 frames. ], batch size: 23, lr: 3.80e-03, grad_scale: 16.0
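The [scaling.py:679] "Whitening" entries compare a whiteness metric of some activation against a limit (2.0 for the grouped-channel cases, 5.0 for the full 384-channel case); only when the metric exceeds the limit does the Whiten module penalize the activations toward a whiter covariance. A sketch of one way to compute such a metric (the assumed form, mean squared eigenvalue over squared mean eigenvalue of the per-group covariance, equals 1.0 for perfectly white features):

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (num_frames, num_channels); channels split into num_groups.
        n, c = x.shape
        x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
        cov = x.transpose(1, 2) @ x / n          # per-group covariance
        eigs = torch.linalg.eigvalsh(cov)        # symmetric -> real eigenvalues
        return (eigs ** 2).mean() / eigs.mean() ** 2

    # whitening_metric(torch.randn(1000, 192), num_groups=8) is close to 1.0;
    # a logged metric=1.70 vs. limit=2.0 means no penalty was applied.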
2023-03-26 10:18:38,590 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1251, 2.2039, 2.2262, 1.5904, 2.3717, 2.3489, 2.3123, 2.0579], device='cuda:6'), covar=tensor([0.0584, 0.0602, 0.0713, 0.0837, 0.0606, 0.0661, 0.0598, 0.0970], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0132, 0.0144, 0.0123, 0.0116, 0.0143, 0.0144, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:18:39,663 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.365e+01 1.501e+02 1.768e+02 2.130e+02 3.855e+02, threshold=3.537e+02, percent-clipped=1.0
2023-03-26 10:19:08,421 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8639, 1.1305, 0.9081, 1.6613, 2.1333, 1.3162, 1.6284, 1.7343], device='cuda:6'), covar=tensor([0.1405, 0.2244, 0.1977, 0.1207, 0.1922, 0.2092, 0.1347, 0.1918], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0092, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 10:19:10,157 INFO [finetune.py:976] (6/7) Epoch 9, batch 950, loss[loss=0.1611, simple_loss=0.2267, pruned_loss=0.04769, over 4762.00 frames. ], tot_loss[loss=0.2002, simple_loss=0.2624, pruned_loss=0.06895, over 947630.94 frames. ], batch size: 27, lr: 3.80e-03, grad_scale: 32.0
2023-03-26 10:19:17,309 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.03 vs. limit=5.0
2023-03-26 10:19:36,036 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=46810.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:19:37,374 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.79 vs. limit=5.0
2023-03-26 10:19:44,265 INFO [finetune.py:976] (6/7) Epoch 9, batch 1000, loss[loss=0.2362, simple_loss=0.3068, pruned_loss=0.0828, over 4819.00 frames. ], tot_loss[loss=0.2023, simple_loss=0.2651, pruned_loss=0.06974, over 950677.77 frames. ], batch size: 39, lr: 3.80e-03, grad_scale: 32.0
2023-03-26 10:19:49,061 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.690e+02 1.992e+02 2.479e+02 5.334e+02, threshold=3.983e+02, percent-clipped=2.0
2023-03-26 10:20:18,980 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4958, 1.5203, 1.5372, 0.8703, 1.5596, 1.8342, 1.7192, 1.3051], device='cuda:6'), covar=tensor([0.0885, 0.0559, 0.0462, 0.0542, 0.0411, 0.0454, 0.0340, 0.0666], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0156, 0.0120, 0.0134, 0.0131, 0.0125, 0.0145, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.5822e-05, 1.1483e-04, 8.6664e-05, 9.7688e-05, 9.4010e-05, 9.1360e-05, 1.0672e-04, 1.0800e-04], device='cuda:6')
2023-03-26 10:20:22,582 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=46871.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:20:23,064 INFO [finetune.py:976] (6/7) Epoch 9, batch 1050, loss[loss=0.2185, simple_loss=0.2806, pruned_loss=0.07822, over 4896.00 frames. ], tot_loss[loss=0.2039, simple_loss=0.2677, pruned_loss=0.07003, over 951610.52 frames. ], batch size: 43, lr: 3.80e-03, grad_scale: 32.0
2023-03-26 10:21:05,010 INFO [finetune.py:976] (6/7) Epoch 9, batch 1100, loss[loss=0.2326, simple_loss=0.2952, pruned_loss=0.08499, over 4885.00 frames. ], tot_loss[loss=0.2057, simple_loss=0.2697, pruned_loss=0.07081, over 952182.17 frames. ], batch size: 35, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:21:10,487 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.274e+02 1.654e+02 2.044e+02 2.534e+02 3.814e+02, threshold=4.089e+02, percent-clipped=0.0
2023-03-26 10:21:11,184 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=46932.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:21:16,211 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-26 10:21:23,042 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46951.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 10:21:28,258 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=46958.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:21:30,600 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2571, 2.8669, 2.9920, 3.2013, 3.0332, 2.8411, 3.3100, 0.9689], device='cuda:6'), covar=tensor([0.1060, 0.0885, 0.0979, 0.1063, 0.1575, 0.1757, 0.1107, 0.5057], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0242, 0.0276, 0.0293, 0.0331, 0.0281, 0.0300, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:21:37,704 INFO [finetune.py:976] (6/7) Epoch 9, batch 1150, loss[loss=0.2061, simple_loss=0.2487, pruned_loss=0.0818, over 3963.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2708, pruned_loss=0.07127, over 952349.04 frames. ], batch size: 17, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:21:42,583 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=46980.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:22:02,619 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-26 10:22:08,474 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2615, 2.1461, 1.8348, 2.3104, 2.0995, 2.0075, 1.9955, 2.9892], device='cuda:6'), covar=tensor([0.4781, 0.6222, 0.4079, 0.5483, 0.5415, 0.2992, 0.5482, 0.1966], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0222, 0.0280, 0.0242, 0.0208, 0.0244, 0.0208], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:22:10,618 INFO [finetune.py:976] (6/7) Epoch 9, batch 1200, loss[loss=0.2064, simple_loss=0.2678, pruned_loss=0.07246, over 4780.00 frames. ], tot_loss[loss=0.2031, simple_loss=0.2676, pruned_loss=0.06935, over 953740.54 frames. ], batch size: 51, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:22:16,140 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.814e+01 1.625e+02 2.018e+02 2.406e+02 3.989e+02, threshold=4.036e+02, percent-clipped=0.0
2023-03-26 10:22:43,585 INFO [finetune.py:976] (6/7) Epoch 9, batch 1250, loss[loss=0.1559, simple_loss=0.2252, pruned_loss=0.0433, over 4862.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2634, pruned_loss=0.06761, over 955029.06 frames. ], batch size: 31, lr: 3.80e-03, grad_scale: 16.0
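The [zipformer.py:2441] dumps report, per attention head, the average entropy of the attention weights (the first tensor) together with covariance summaries of the projections. A sketch of the entropy part only; the covar/in_proj_covar/out_proj_covar fields are separate parameter statistics and are not reconstructed here:

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        # attn: (num_heads, batch, query_len, key_len), rows summing to 1.
        # Returns one value per head: mean entropy in nats. Low entropy
        # means peaked attention; log(key_len) means uniform attention.
        p = attn.clamp(min=1e-20)
        ent = -(p * p.log()).sum(dim=-1)
        return ent.mean(dim=(1, 2))

    # Uniform attention over 50 keys gives log(50) = 3.91 for every head:
    print(attn_weights_entropy(torch.full((8, 2, 10, 50), 1 / 50.0)))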
2023-03-26 10:22:56,207 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8240, 1.1979, 0.8805, 1.7154, 2.1297, 1.5177, 1.6984, 1.5664], device='cuda:6'), covar=tensor([0.1658, 0.2428, 0.2301, 0.1307, 0.2098, 0.2107, 0.1562, 0.2376], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0096, 0.0113, 0.0092, 0.0122, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 10:22:58,671 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0487, 1.8256, 1.5951, 1.7534, 1.7859, 1.7018, 1.7103, 2.5752], device='cuda:6'), covar=tensor([0.5147, 0.5121, 0.4106, 0.4888, 0.4803, 0.2996, 0.4749, 0.1939], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0223, 0.0280, 0.0243, 0.0209, 0.0245, 0.0208], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:23:21,438 INFO [finetune.py:976] (6/7) Epoch 9, batch 1300, loss[loss=0.1845, simple_loss=0.2464, pruned_loss=0.06134, over 4824.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.2609, pruned_loss=0.06708, over 955973.92 frames. ], batch size: 45, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:23:31,786 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.169e+01 1.658e+02 1.956e+02 2.404e+02 5.414e+02, threshold=3.912e+02, percent-clipped=2.0
2023-03-26 10:23:58,434 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47166.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:24:03,034 INFO [finetune.py:976] (6/7) Epoch 9, batch 1350, loss[loss=0.1729, simple_loss=0.2431, pruned_loss=0.05138, over 4865.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2629, pruned_loss=0.06815, over 957444.52 frames. ], batch size: 44, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:24:36,403 INFO [finetune.py:976] (6/7) Epoch 9, batch 1400, loss[loss=0.2211, simple_loss=0.2913, pruned_loss=0.07544, over 4931.00 frames. ], tot_loss[loss=0.2029, simple_loss=0.2668, pruned_loss=0.06951, over 955646.88 frames. ], batch size: 36, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:24:42,785 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.739e+02 2.002e+02 2.347e+02 4.228e+02, threshold=4.005e+02, percent-clipped=2.0
2023-03-26 10:24:43,503 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6770, 1.2399, 0.9105, 1.6252, 2.0119, 1.5088, 1.4816, 1.5580], device='cuda:6'), covar=tensor([0.1465, 0.2134, 0.2083, 0.1214, 0.1975, 0.2066, 0.1444, 0.2002], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0096, 0.0113, 0.0092, 0.0122, 0.0095, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 10:24:49,009 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47241.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:24:55,135 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47251.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:24:55,147 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6910, 1.4991, 2.0667, 1.2663, 1.9478, 1.9086, 1.5295, 2.0825], device='cuda:6'), covar=tensor([0.1337, 0.2020, 0.1536, 0.2129, 0.0851, 0.1420, 0.2690, 0.0899], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0204, 0.0196, 0.0193, 0.0180, 0.0217, 0.0217, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:24:55,232 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.69 vs. limit=2.0
2023-03-26 10:24:59,421 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47258.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:09,253 INFO [finetune.py:976] (6/7) Epoch 9, batch 1450, loss[loss=0.2043, simple_loss=0.2663, pruned_loss=0.07118, over 4857.00 frames. ], tot_loss[loss=0.2039, simple_loss=0.2682, pruned_loss=0.06978, over 955178.99 frames. ], batch size: 31, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:25:27,570 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47299.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:29,420 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47302.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:36,208 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47306.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:25:46,838 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 10:25:53,869 INFO [finetune.py:976] (6/7) Epoch 9, batch 1500, loss[loss=0.1922, simple_loss=0.2656, pruned_loss=0.05941, over 4770.00 frames. ], tot_loss[loss=0.2045, simple_loss=0.2688, pruned_loss=0.07008, over 955084.86 frames. ], batch size: 28, lr: 3.80e-03, grad_scale: 16.0
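The loss fields in the [finetune.py:976] lines are related: with the recipe's simple_loss_scale of 0.5, the reported loss is 0.5 * simple_loss + pruned_loss once the pruned loss is at full weight; for batch 1500 above, 0.5 * 0.2688 + 0.07008 = 0.2045. In sketch form (the pruned term's scale warms up from a smaller value early in training):

    def combine_transducer_losses(simple_loss, pruned_loss,
                                  simple_loss_scale=0.5, pruned_loss_scale=1.0):
        # loss = 0.5 * simple_loss + pruned_loss at full pruned weight,
        # matching the logged loss/simple_loss/pruned_loss fields.
        return simple_loss_scale * simple_loss + pruned_loss_scale * pruned_loss

    assert abs(combine_transducer_losses(0.2688, 0.07008) - 0.2045) < 1e-4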
2023-03-26 10:26:00,804 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.648e+02 1.983e+02 2.305e+02 4.092e+02, threshold=3.967e+02, percent-clipped=1.0
2023-03-26 10:26:25,132 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6913, 1.5359, 1.6063, 1.6994, 1.1186, 3.8218, 1.4757, 2.0812], device='cuda:6'), covar=tensor([0.3347, 0.2488, 0.2069, 0.2341, 0.1902, 0.0136, 0.2509, 0.1255], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0116, 0.0098, 0.0100, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 10:26:26,879 INFO [finetune.py:976] (6/7) Epoch 9, batch 1550, loss[loss=0.1785, simple_loss=0.2368, pruned_loss=0.06013, over 4929.00 frames. ], tot_loss[loss=0.2027, simple_loss=0.2678, pruned_loss=0.06877, over 956045.22 frames. ], batch size: 33, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:26:33,205 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47379.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:26:50,625 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 10:26:51,166 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2051, 2.1880, 2.2993, 1.6337, 2.2819, 2.4097, 2.3754, 1.9052], device='cuda:6'), covar=tensor([0.0653, 0.0622, 0.0746, 0.0912, 0.0561, 0.0739, 0.0684, 0.1111], device='cuda:6'), in_proj_covar=tensor([0.0137, 0.0132, 0.0145, 0.0125, 0.0117, 0.0145, 0.0145, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:26:51,595 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0
2023-03-26 10:27:00,842 INFO [finetune.py:976] (6/7) Epoch 9, batch 1600, loss[loss=0.2213, simple_loss=0.2774, pruned_loss=0.08264, over 4769.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2649, pruned_loss=0.06771, over 958502.95 frames. ], batch size: 26, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:27:06,444 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0
2023-03-26 10:27:07,816 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.632e+02 1.991e+02 2.321e+02 5.028e+02, threshold=3.982e+02, percent-clipped=1.0
2023-03-26 10:27:14,926 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47440.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:27:15,530 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1855, 1.9333, 1.5861, 1.9321, 2.0401, 1.7908, 2.3540, 2.1154], device='cuda:6'), covar=tensor([0.1534, 0.2559, 0.3741, 0.3014, 0.2954, 0.1895, 0.3450, 0.2040], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0188, 0.0233, 0.0253, 0.0237, 0.0194, 0.0211, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:27:30,690 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47466.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:27:34,230 INFO [finetune.py:976] (6/7) Epoch 9, batch 1650, loss[loss=0.1657, simple_loss=0.243, pruned_loss=0.0442, over 4767.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2617, pruned_loss=0.06712, over 956359.38 frames. ], batch size: 26, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:27:36,481 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0
2023-03-26 10:28:02,972 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47514.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:28:07,772 INFO [finetune.py:976] (6/7) Epoch 9, batch 1700, loss[loss=0.2277, simple_loss=0.2824, pruned_loss=0.08647, over 4907.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.2606, pruned_loss=0.06716, over 955486.70 frames. ], batch size: 36, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:28:13,223 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.617e+02 1.864e+02 2.209e+02 5.015e+02, threshold=3.728e+02, percent-clipped=2.0
2023-03-26 10:28:47,732 INFO [finetune.py:976] (6/7) Epoch 9, batch 1750, loss[loss=0.2083, simple_loss=0.2518, pruned_loss=0.08241, over 4013.00 frames. ], tot_loss[loss=0.2003, simple_loss=0.2632, pruned_loss=0.06875, over 953848.72 frames. ], batch size: 17, lr: 3.80e-03, grad_scale: 16.0
2023-03-26 10:29:13,718 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47597.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:29:29,378 INFO [finetune.py:976] (6/7) Epoch 9, batch 1800, loss[loss=0.1791, simple_loss=0.2459, pruned_loss=0.05614, over 4760.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2659, pruned_loss=0.06903, over 955578.88 frames. ], batch size: 28, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:29:34,862 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.172e+02 1.736e+02 1.994e+02 2.573e+02 6.193e+02, threshold=3.989e+02, percent-clipped=4.0
2023-03-26 10:30:02,647 INFO [finetune.py:976] (6/7) Epoch 9, batch 1850, loss[loss=0.2488, simple_loss=0.3011, pruned_loss=0.09832, over 4908.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2673, pruned_loss=0.06988, over 955295.27 frames. ], batch size: 36, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:30:11,327 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
2023-03-26 10:30:35,896 INFO [finetune.py:976] (6/7) Epoch 9, batch 1900, loss[loss=0.1882, simple_loss=0.2462, pruned_loss=0.06507, over 4828.00 frames. ], tot_loss[loss=0.2031, simple_loss=0.2673, pruned_loss=0.06944, over 954034.28 frames. ], batch size: 33, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:30:42,018 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5425, 3.2542, 3.0802, 1.4754, 3.3561, 2.5180, 0.8533, 2.2751], device='cuda:6'), covar=tensor([0.2572, 0.1681, 0.1616, 0.3055, 0.1092, 0.1039, 0.3915, 0.1396], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0172, 0.0159, 0.0128, 0.0155, 0.0121, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 10:30:46,260 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.162e+02 1.608e+02 1.835e+02 2.236e+02 3.803e+02, threshold=3.670e+02, percent-clipped=0.0
2023-03-26 10:30:53,010 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=47735.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:30:54,869 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47738.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:31:21,759 INFO [finetune.py:976] (6/7) Epoch 9, batch 1950, loss[loss=0.1981, simple_loss=0.2591, pruned_loss=0.06857, over 4835.00 frames. ], tot_loss[loss=0.2024, simple_loss=0.2662, pruned_loss=0.0693, over 955011.96 frames. ], batch size: 47, lr: 3.79e-03, grad_scale: 16.0
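The lr field creeps down across these lines (3.81e-03 at the top of the section, 3.79e-03 by batch 1800) because icefall's Eden scheduler decays the rate smoothly in both the batch and epoch counts. A sketch, assuming the recipe defaults base_lr=0.004, lr_batches=100000, lr_epochs=100:

    def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
        # lr = base_lr * ((batch^2 + lr_batches^2) / lr_batches^2)^-0.25
        #               * ((epoch^2 + lr_epochs^2) / lr_epochs^2)^-0.25
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

    print(eden_lr(0.004, batch=47600, epoch=9))  # ~3.79e-03, as logged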
2023-03-26 10:31:31,515 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47788.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:31:39,618 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47799.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:31:47,237 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=47809.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:31:55,014 INFO [finetune.py:976] (6/7) Epoch 9, batch 2000, loss[loss=0.1994, simple_loss=0.26, pruned_loss=0.0694, over 4922.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.2639, pruned_loss=0.06923, over 955084.95 frames. ], batch size: 36, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:32:00,456 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.527e+02 1.824e+02 2.186e+02 3.277e+02, threshold=3.648e+02, percent-clipped=0.0
2023-03-26 10:32:11,890 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47849.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:32:35,771 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=47870.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:32:36,815 INFO [finetune.py:976] (6/7) Epoch 9, batch 2050, loss[loss=0.2024, simple_loss=0.2489, pruned_loss=0.07794, over 3967.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2607, pruned_loss=0.06822, over 953871.37 frames. ], batch size: 17, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:32:57,836 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=47897.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:33:02,344 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-26 10:33:15,824 INFO [finetune.py:976] (6/7) Epoch 9, batch 2100, loss[loss=0.1615, simple_loss=0.2334, pruned_loss=0.04473, over 4932.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2609, pruned_loss=0.06845, over 955348.42 frames. ], batch size: 33, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:33:21,290 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.604e+02 1.917e+02 2.370e+02 5.169e+02, threshold=3.834e+02, percent-clipped=3.0
2023-03-26 10:33:29,791 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=47945.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:33:34,087 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9083, 1.6809, 1.5711, 1.6632, 1.5914, 1.6117, 1.6530, 2.2943], device='cuda:6'), covar=tensor([0.4249, 0.4933, 0.3665, 0.4216, 0.4661, 0.2549, 0.4362, 0.1785], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0222, 0.0280, 0.0242, 0.0208, 0.0244, 0.0209], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:33:54,984 INFO [finetune.py:976] (6/7) Epoch 9, batch 2150, loss[loss=0.2353, simple_loss=0.3023, pruned_loss=0.0841, over 4816.00 frames. ], tot_loss[loss=0.2025, simple_loss=0.265, pruned_loss=0.07, over 953692.92 frames. ], batch size: 40, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:34:04,047 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5138, 1.4402, 1.9106, 1.8299, 1.5001, 3.3287, 1.2505, 1.6383], device='cuda:6'), covar=tensor([0.0958, 0.1852, 0.1175, 0.1029, 0.1712, 0.0277, 0.1660, 0.1778], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0083, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 10:34:37,731 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5982, 1.2132, 1.0082, 1.5691, 2.0049, 1.2683, 1.4387, 1.6358], device='cuda:6'), covar=tensor([0.1547, 0.2059, 0.1839, 0.1246, 0.2090, 0.2221, 0.1414, 0.1985], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0112, 0.0091, 0.0121, 0.0095, 0.0099, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 10:35:00,857 INFO [finetune.py:976] (6/7) Epoch 9, batch 2200, loss[loss=0.1998, simple_loss=0.2587, pruned_loss=0.0704, over 4794.00 frames. ], tot_loss[loss=0.2042, simple_loss=0.2679, pruned_loss=0.07026, over 955463.46 frames. ], batch size: 29, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:35:11,850 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.338e+02 1.754e+02 2.052e+02 2.528e+02 4.321e+02, threshold=4.105e+02, percent-clipped=1.0
2023-03-26 10:35:18,712 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48035.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:35:27,967 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-26 10:36:02,993 INFO [finetune.py:976] (6/7) Epoch 9, batch 2250, loss[loss=0.2324, simple_loss=0.2912, pruned_loss=0.08685, over 4832.00 frames. ], tot_loss[loss=0.2066, simple_loss=0.2702, pruned_loss=0.0715, over 954124.46 frames. ], batch size: 47, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:36:03,123 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48072.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:36:20,661 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48083.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:36:32,293 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48094.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:37:05,451 INFO [finetune.py:976] (6/7) Epoch 9, batch 2300, loss[loss=0.2072, simple_loss=0.2818, pruned_loss=0.06635, over 4897.00 frames. ], tot_loss[loss=0.2047, simple_loss=0.2695, pruned_loss=0.06996, over 956122.49 frames. ], batch size: 36, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:37:15,970 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.53 vs. limit=5.0
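In each training line, loss[...] is the current batch and tot_loss[...] is a frame-weighted average over a recent window; judging by the frame counts (~955000 frames at ~4800 frames per batch), that window is roughly the last 200 batches before the statistics are reset. A simplified sketch of that accumulator:

    class LossTracker:
        # Frame-weighted running average behind the tot_loss[...] fields.
        def __init__(self):
            self.loss_sum = 0.0
            self.frames = 0.0

        def accumulate(self, loss, frames):
            self.loss_sum += loss * frames
            self.frames += frames

        @property
        def value(self):
            return self.loss_sum / self.frames

    tot = LossTracker()
    tot.accumulate(loss=0.2042, frames=4832.0)   # illustrative batch stats
    tot.accumulate(loss=0.1908, frames=4783.0)
    print(f"tot_loss={tot.value:.4f} over {tot.frames} frames")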
2023-03-26 10:37:16,654 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.508e+01 1.652e+02 1.874e+02 2.360e+02 5.580e+02, threshold=3.748e+02, percent-clipped=1.0
2023-03-26 10:37:23,217 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48133.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:37:34,219 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48144.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:37:54,848 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48165.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 10:38:01,014 INFO [finetune.py:976] (6/7) Epoch 9, batch 2350, loss[loss=0.1908, simple_loss=0.2546, pruned_loss=0.06352, over 4783.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2673, pruned_loss=0.06974, over 955948.90 frames. ], batch size: 29, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:38:12,405 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1497, 1.9763, 1.7870, 2.0331, 1.9557, 1.9745, 1.8687, 2.7752], device='cuda:6'), covar=tensor([0.4981, 0.6556, 0.4250, 0.5854, 0.5484, 0.2992, 0.5950, 0.2028], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0221, 0.0279, 0.0242, 0.0208, 0.0244, 0.0209], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:38:33,181 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4804, 1.0561, 0.7827, 1.3711, 1.9843, 0.7722, 1.2983, 1.4407], device='cuda:6'), covar=tensor([0.1522, 0.2202, 0.1889, 0.1282, 0.2033, 0.2180, 0.1497, 0.2127], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0092, 0.0122, 0.0095, 0.0099, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 10:38:35,604 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48212.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 10:38:43,034 INFO [finetune.py:976] (6/7) Epoch 9, batch 2400, loss[loss=0.2064, simple_loss=0.2633, pruned_loss=0.07477, over 4731.00 frames. ], tot_loss[loss=0.2002, simple_loss=0.2635, pruned_loss=0.06843, over 956517.36 frames. ], batch size: 23, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:38:49,467 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.159e+02 1.602e+02 2.015e+02 2.421e+02 3.465e+02, threshold=4.031e+02, percent-clipped=0.0
2023-03-26 10:39:01,018 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0
2023-03-26 10:39:18,902 INFO [finetune.py:976] (6/7) Epoch 9, batch 2450, loss[loss=0.1963, simple_loss=0.2547, pruned_loss=0.06899, over 4745.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2603, pruned_loss=0.06748, over 953482.12 frames. ], batch size: 23, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:39:19,462 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.71 vs. limit=2.0
2023-03-26 10:39:19,634 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48273.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 10:40:01,693 INFO [finetune.py:976] (6/7) Epoch 9, batch 2500, loss[loss=0.1842, simple_loss=0.2552, pruned_loss=0.05667, over 4756.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.263, pruned_loss=0.06817, over 954913.36 frames. ], batch size: 28, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:40:03,382 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48324.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:40:09,017 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.681e+02 1.962e+02 2.420e+02 5.026e+02, threshold=3.923e+02, percent-clipped=2.0
2023-03-26 10:40:35,357 INFO [finetune.py:976] (6/7) Epoch 9, batch 2550, loss[loss=0.1812, simple_loss=0.2518, pruned_loss=0.05533, over 4820.00 frames. ], tot_loss[loss=0.2023, simple_loss=0.2667, pruned_loss=0.06894, over 954227.62 frames. ], batch size: 40, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:40:45,889 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48385.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:40:51,848 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48394.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:40:53,632 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48397.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:41:08,895 INFO [finetune.py:976] (6/7) Epoch 9, batch 2600, loss[loss=0.2004, simple_loss=0.2776, pruned_loss=0.06161, over 4813.00 frames. ], tot_loss[loss=0.2021, simple_loss=0.2668, pruned_loss=0.06868, over 953674.46 frames. ], batch size: 40, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:41:09,604 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48423.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:41:12,600 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48428.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:41:15,194 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.166e+02 1.754e+02 2.041e+02 2.553e+02 4.015e+02, threshold=4.083e+02, percent-clipped=1.0
2023-03-26 10:41:24,234 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48442.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:41:25,447 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48444.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:41:34,011 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48458.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:41:38,229 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48465.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:41:42,380 INFO [finetune.py:976] (6/7) Epoch 9, batch 2650, loss[loss=0.2017, simple_loss=0.2663, pruned_loss=0.06856, over 4847.00 frames. ], tot_loss[loss=0.2034, simple_loss=0.2684, pruned_loss=0.06918, over 954597.60 frames. ], batch size: 44, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:41:56,019 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48484.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:42:06,172 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48492.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:42:19,479 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48513.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:42:24,896 INFO [finetune.py:976] (6/7) Epoch 9, batch 2700, loss[loss=0.2129, simple_loss=0.277, pruned_loss=0.07437, over 4901.00 frames. ], tot_loss[loss=0.2023, simple_loss=0.2671, pruned_loss=0.06869, over 954628.97 frames. ], batch size: 36, lr: 3.79e-03, grad_scale: 16.0
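The "batch size" field swings between 17 and 67 across these lines because the sampler packs utterances into batches by total duration rather than by a fixed count, so batches of short Common Voice cuts hold many more utterances. A toy version of duration packing (lhotse's DynamicBucketingSampler additionally groups cuts of similar length into buckets first):

    def pack_by_duration(durations, max_duration=200.0):
        # Greedily fill each batch up to a duration budget in seconds.
        batches, cur, cur_dur = [], [], 0.0
        for d in durations:
            if cur and cur_dur + d > max_duration:
                batches.append(cur)
                cur, cur_dur = [], 0.0
            cur.append(d)
            cur_dur += d
        if cur:
            batches.append(cur)
        return batches

    # 3 s cuts -> 66 per batch; 8 s cuts -> 25 per batch
    print(len(pack_by_duration([3.0] * 200)[0]), len(pack_by_duration([8.0] * 100)[0]))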
2023-03-26 10:42:30,333 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.590e+02 1.873e+02 2.487e+02 4.580e+02, threshold=3.745e+02, percent-clipped=2.0
2023-03-26 10:43:07,962 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48568.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 10:43:10,318 INFO [finetune.py:976] (6/7) Epoch 9, batch 2750, loss[loss=0.1897, simple_loss=0.2435, pruned_loss=0.06795, over 4713.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.264, pruned_loss=0.06751, over 952179.95 frames. ], batch size: 23, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:43:15,550 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 10:43:45,669 INFO [finetune.py:976] (6/7) Epoch 9, batch 2800, loss[loss=0.1822, simple_loss=0.2431, pruned_loss=0.06069, over 4830.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2599, pruned_loss=0.06555, over 953316.17 frames. ], batch size: 30, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:43:51,109 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.018e+02 1.496e+02 1.797e+02 2.211e+02 4.995e+02, threshold=3.593e+02, percent-clipped=1.0
2023-03-26 10:44:19,116 INFO [finetune.py:976] (6/7) Epoch 9, batch 2850, loss[loss=0.1834, simple_loss=0.2479, pruned_loss=0.05948, over 4905.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2588, pruned_loss=0.06578, over 952517.76 frames. ], batch size: 32, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:44:24,042 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48680.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:44:45,949 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 10:45:04,573 INFO [finetune.py:976] (6/7) Epoch 9, batch 2900, loss[loss=0.2164, simple_loss=0.297, pruned_loss=0.06792, over 4901.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2627, pruned_loss=0.06729, over 953687.00 frames. ], batch size: 43, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:45:08,339 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48728.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:45:10,031 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.274e+02 1.660e+02 1.926e+02 2.355e+02 4.281e+02, threshold=3.853e+02, percent-clipped=2.0
2023-03-26 10:45:25,493 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48753.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:45:31,338 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3545, 1.9794, 2.2502, 2.2451, 1.9849, 1.9837, 2.2131, 2.0319], device='cuda:6'), covar=tensor([0.4385, 0.5198, 0.4381, 0.4845, 0.6459, 0.4467, 0.6100, 0.4396], device='cuda:6'), in_proj_covar=tensor([0.0232, 0.0239, 0.0253, 0.0255, 0.0249, 0.0224, 0.0273, 0.0228], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:45:38,484 INFO [finetune.py:976] (6/7) Epoch 9, batch 2950, loss[loss=0.2797, simple_loss=0.334, pruned_loss=0.1127, over 4147.00 frames. ], tot_loss[loss=0.2024, simple_loss=0.2666, pruned_loss=0.06908, over 952171.37 frames. ], batch size: 67, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:45:40,982 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48776.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:45:42,837 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=48779.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:46:11,301 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48821.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:46:11,795 INFO [finetune.py:976] (6/7) Epoch 9, batch 3000, loss[loss=0.1718, simple_loss=0.2469, pruned_loss=0.04838, over 4722.00 frames. ], tot_loss[loss=0.2036, simple_loss=0.268, pruned_loss=0.06961, over 953388.31 frames. ], batch size: 59, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:46:11,795 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 10:46:22,396 INFO [finetune.py:1010] (6/7) Epoch 9, validation: loss=0.159, simple_loss=0.2302, pruned_loss=0.04393, over 2265189.00 frames.
2023-03-26 10:46:22,396 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB
2023-03-26 10:46:27,909 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.631e+02 1.894e+02 2.277e+02 3.777e+02, threshold=3.789e+02, percent-clipped=0.0
2023-03-26 10:46:32,274 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=48838.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:46:52,193 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48868.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:46:54,488 INFO [finetune.py:976] (6/7) Epoch 9, batch 3050, loss[loss=0.1584, simple_loss=0.2313, pruned_loss=0.04268, over 4895.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2681, pruned_loss=0.06899, over 953399.97 frames. ], batch size: 32, lr: 3.79e-03, grad_scale: 16.0
2023-03-26 10:47:03,442 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48882.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:47:13,621 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=48899.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 10:47:25,231 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=48916.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 10:47:25,844 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4134, 3.8863, 4.0587, 4.2473, 4.1610, 3.8881, 4.4495, 1.8234], device='cuda:6'), covar=tensor([0.0779, 0.0863, 0.0785, 0.0969, 0.1130, 0.1541, 0.0720, 0.4560], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0244, 0.0275, 0.0291, 0.0330, 0.0281, 0.0300, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 10:47:29,279 INFO [finetune.py:976] (6/7) Epoch 9, batch 3100, loss[loss=0.22, simple_loss=0.2776, pruned_loss=0.08125, over 4876.00 frames. ], tot_loss[loss=0.2026, simple_loss=0.2669, pruned_loss=0.06916, over 954822.52 frames. ], batch size: 31, lr: 3.79e-03, grad_scale: 32.0
2023-03-26 10:47:36,135 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.624e+02 1.916e+02 2.206e+02 4.881e+02, threshold=3.833e+02, percent-clipped=2.0
2023-03-26 10:48:07,639 INFO [finetune.py:976] (6/7) Epoch 9, batch 3150, loss[loss=0.1934, simple_loss=0.2542, pruned_loss=0.06628, over 4787.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2639, pruned_loss=0.06767, over 955986.91 frames. ], batch size: 26, lr: 3.78e-03, grad_scale: 32.0
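grad_scale is the fp16 loss scale: it sat at 16.0, doubled to 32.0 around batch 950, fell back to 16.0 by batch 1100, and is back at 32.0 here. That is standard dynamic loss scaling: the scale is doubled after a long stretch of overflow-free steps and halved whenever an inf/nan gradient forces a skipped step. A sketch of one training step under torch.cuda.amp (parameter values illustrative; icefall wires this into its own optimizer logic):

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=16.0, growth_factor=2.0,
                                       backoff_factor=0.5, growth_interval=2000)

    def fp16_step(model, optimizer, loss_fn, batch):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = loss_fn(model, batch)
        scaler.scale(loss).backward()  # gradients carry the current scale
        scaler.step(optimizer)         # skipped if inf/nan is found
        scaler.update()                # halve on overflow, double when stable
        return loss.detach(), scaler.get_scale()   # logged as grad_scale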
], batch size: 26, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:48:16,701 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=48980.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:48:51,013 INFO [finetune.py:976] (6/7) Epoch 9, batch 3200, loss[loss=0.1554, simple_loss=0.23, pruned_loss=0.04041, over 4768.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2607, pruned_loss=0.06608, over 958853.99 frames. ], batch size: 26, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:48:55,660 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49028.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:48:57,863 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.147e+02 1.665e+02 1.973e+02 2.326e+02 6.022e+02, threshold=3.945e+02, percent-clipped=4.0 2023-03-26 10:48:59,418 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 10:49:12,709 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49053.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:49:30,123 INFO [finetune.py:976] (6/7) Epoch 9, batch 3250, loss[loss=0.1829, simple_loss=0.263, pruned_loss=0.0514, over 4928.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2609, pruned_loss=0.06653, over 957248.63 frames. ], batch size: 38, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:49:40,155 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49079.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:50:05,722 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49101.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:50:21,991 INFO [finetune.py:976] (6/7) Epoch 9, batch 3300, loss[loss=0.2782, simple_loss=0.3263, pruned_loss=0.1151, over 4207.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2656, pruned_loss=0.06823, over 956873.51 frames. ], batch size: 65, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:50:26,598 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49127.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:50:28,942 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.707e+02 1.914e+02 2.346e+02 3.542e+02, threshold=3.827e+02, percent-clipped=0.0 2023-03-26 10:50:56,005 INFO [finetune.py:976] (6/7) Epoch 9, batch 3350, loss[loss=0.2018, simple_loss=0.2614, pruned_loss=0.07115, over 4762.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2667, pruned_loss=0.06864, over 956405.30 frames. ], batch size: 27, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:50:57,599 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 10:50:59,114 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49177.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:51:10,100 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. 
limit=2.0 2023-03-26 10:51:16,405 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9559, 1.0401, 1.8358, 1.7979, 1.6965, 1.5979, 1.6938, 1.7245], device='cuda:6'), covar=tensor([0.3765, 0.4763, 0.4049, 0.4301, 0.5315, 0.4018, 0.5216, 0.3923], device='cuda:6'), in_proj_covar=tensor([0.0233, 0.0240, 0.0253, 0.0256, 0.0248, 0.0225, 0.0273, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:51:17,576 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49194.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:51:30,255 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8320, 4.0718, 3.8073, 2.1190, 4.1725, 3.0401, 0.8255, 2.7342], device='cuda:6'), covar=tensor([0.2314, 0.1794, 0.1391, 0.2831, 0.0914, 0.1034, 0.4305, 0.1424], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0173, 0.0159, 0.0127, 0.0155, 0.0121, 0.0146, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 10:51:49,087 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6838, 1.5598, 1.5857, 1.5913, 1.0232, 3.0021, 1.2017, 1.7201], device='cuda:6'), covar=tensor([0.3267, 0.2572, 0.2069, 0.2358, 0.1989, 0.0253, 0.2738, 0.1359], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0116, 0.0098, 0.0100, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 10:51:50,172 INFO [finetune.py:976] (6/7) Epoch 9, batch 3400, loss[loss=0.1625, simple_loss=0.2382, pruned_loss=0.04336, over 4733.00 frames. ], tot_loss[loss=0.2022, simple_loss=0.2671, pruned_loss=0.06865, over 956027.83 frames. 
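
The zipformer.py:2441 attn_weights_entropy dumps are periodic health checks on the self-attention heads: each 8-element row is one entropy (or covariance) statistic per head, where low entropy means sharply focused attention and high entropy means nearly uniform attention. A minimal sketch of such a per-head statistic follows; the tensor layout is an assumption for illustration, not zipformer.py's actual one:

    import torch

    def attn_entropy(attn_weights: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        # attn_weights: (num_heads, query_len, key_len), each row a distribution over keys
        ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return ent.mean(dim=-1)  # average over query positions -> one value per head

    w = torch.softmax(torch.randn(8, 16, 16), dim=-1)
    print(attn_entropy(w))  # 8 values, like the 8-element rows in the dumps above
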
], batch size: 59, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:51:59,827 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.353e+01 1.606e+02 1.878e+02 2.295e+02 4.525e+02, threshold=3.756e+02, percent-clipped=2.0 2023-03-26 10:52:10,049 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49236.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:52:30,714 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4922, 1.5638, 1.5191, 0.9004, 1.6253, 1.8153, 1.7635, 1.3981], device='cuda:6'), covar=tensor([0.0879, 0.0533, 0.0525, 0.0583, 0.0472, 0.0598, 0.0354, 0.0872], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0155, 0.0121, 0.0135, 0.0131, 0.0126, 0.0146, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.5472e-05, 1.1419e-04, 8.7722e-05, 9.7909e-05, 9.3836e-05, 9.2162e-05, 1.0706e-04, 1.0804e-04], device='cuda:6') 2023-03-26 10:52:31,279 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8108, 4.0863, 3.8164, 2.0583, 4.1770, 3.0741, 0.7946, 2.8165], device='cuda:6'), covar=tensor([0.2332, 0.1996, 0.1427, 0.3196, 0.0945, 0.1011, 0.4851, 0.1501], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0174, 0.0159, 0.0128, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 10:52:31,336 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9641, 1.8698, 2.3420, 1.5239, 2.0288, 2.3149, 1.7652, 2.5076], device='cuda:6'), covar=tensor([0.1462, 0.1945, 0.1549, 0.2275, 0.1121, 0.1718, 0.2581, 0.0974], device='cuda:6'), in_proj_covar=tensor([0.0199, 0.0205, 0.0194, 0.0193, 0.0181, 0.0219, 0.0218, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:52:41,767 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 10:52:55,266 INFO [finetune.py:976] (6/7) Epoch 9, batch 3450, loss[loss=0.2249, simple_loss=0.2652, pruned_loss=0.09225, over 4701.00 frames. ], tot_loss[loss=0.2017, simple_loss=0.2668, pruned_loss=0.06832, over 955450.46 frames. ], batch size: 23, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:52:56,674 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7959, 1.7047, 2.1195, 1.3619, 1.8269, 2.0311, 1.6118, 2.2358], device='cuda:6'), covar=tensor([0.1373, 0.2013, 0.1414, 0.2001, 0.0998, 0.1379, 0.2672, 0.0907], device='cuda:6'), in_proj_covar=tensor([0.0199, 0.0205, 0.0194, 0.0193, 0.0181, 0.0218, 0.0218, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:53:11,873 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-26 10:53:32,538 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49297.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:53:53,351 INFO [finetune.py:976] (6/7) Epoch 9, batch 3500, loss[loss=0.2165, simple_loss=0.2752, pruned_loss=0.0789, over 4847.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2635, pruned_loss=0.06722, over 954955.51 frames. 
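
In every optim.py:369 record the five grad-norm numbers are the min/25%/median/75%/max quartiles of recently observed gradient norms, and the printed threshold equals Clipping_scale times the median (for example 2.0 * 1.878e+02 = 3.756e+02 in the record just above); percent-clipped is the share of recent batches whose norm exceeded that threshold. A sketch of the bookkeeping, with the window mechanics assumed rather than taken from optim.py:

    import torch

    def clipping_stats(recent_norms: torch.Tensor, clipping_scale: float = 2.0):
        # recent_norms: 1-D tensor of gradient norms from the last few hundred batches
        q = torch.quantile(recent_norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = clipping_scale * q[2]                        # scale * median
        pct = 100.0 * (recent_norms > threshold).float().mean()  # "percent-clipped"
        return q, threshold, pct

    print(clipping_stats(torch.tensor([93.5, 160.6, 187.8, 229.5, 452.5])))
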
], batch size: 47, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:53:58,769 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.331e+01 1.641e+02 1.916e+02 2.289e+02 6.335e+02, threshold=3.833e+02, percent-clipped=2.0 2023-03-26 10:54:13,544 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 10:54:34,214 INFO [finetune.py:976] (6/7) Epoch 9, batch 3550, loss[loss=0.1829, simple_loss=0.2357, pruned_loss=0.06506, over 4822.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2613, pruned_loss=0.06641, over 957682.41 frames. ], batch size: 39, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:55:09,816 INFO [finetune.py:976] (6/7) Epoch 9, batch 3600, loss[loss=0.2114, simple_loss=0.2722, pruned_loss=0.07533, over 4903.00 frames. ], tot_loss[loss=0.1964, simple_loss=0.2599, pruned_loss=0.06645, over 958116.54 frames. ], batch size: 37, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:55:09,942 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49422.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:55:15,222 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.662e+02 2.002e+02 2.382e+02 4.044e+02, threshold=4.004e+02, percent-clipped=1.0 2023-03-26 10:55:15,947 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49432.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:55:29,933 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. limit=5.0 2023-03-26 10:55:43,178 INFO [finetune.py:976] (6/7) Epoch 9, batch 3650, loss[loss=0.1803, simple_loss=0.2653, pruned_loss=0.04764, over 4791.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2621, pruned_loss=0.06757, over 956229.20 frames. ], batch size: 29, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:55:46,389 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49477.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:55:50,107 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49483.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:55:56,194 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49493.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 10:55:56,773 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49494.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:56:17,056 INFO [finetune.py:976] (6/7) Epoch 9, batch 3700, loss[loss=0.2226, simple_loss=0.2788, pruned_loss=0.0832, over 4814.00 frames. ], tot_loss[loss=0.202, simple_loss=0.266, pruned_loss=0.06899, over 954394.01 frames. 
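
The zipformer.py:1188 lines track per-module warmup (batch_count measured against a warmup_begin/warmup_end span) and show that on some batches a random encoder layer is skipped outright (num_to_drop=1, layers_to_drop={0}), a stochastic-depth style regularizer. The following is a rough sketch of the idea only; the probabilities and their schedule are invented placeholders, not zipformer.py's values:

    import random

    def layers_to_drop(num_layers: int, batch_count: float,
                       warmup_end: float, p_drop: float = 0.05) -> set:
        # hypothetical schedule: skip one random layer with small probability,
        # somewhat more aggressively while still inside the warmup span
        p = p_drop * (2.0 if batch_count < warmup_end else 1.0)
        return {random.randrange(num_layers)} if random.random() < p else set()
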
], batch size: 30, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:56:18,956 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49525.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:56:22,522 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.782e+02 2.076e+02 2.384e+02 4.659e+02, threshold=4.152e+02, percent-clipped=5.0 2023-03-26 10:56:22,685 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8460, 1.3315, 1.7751, 1.7424, 1.5406, 1.5247, 1.6460, 1.6177], device='cuda:6'), covar=tensor([0.4283, 0.4966, 0.4185, 0.4593, 0.5592, 0.4312, 0.5689, 0.3893], device='cuda:6'), in_proj_covar=tensor([0.0234, 0.0241, 0.0255, 0.0256, 0.0250, 0.0225, 0.0274, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:56:29,218 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49542.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:56:50,572 INFO [finetune.py:976] (6/7) Epoch 9, batch 3750, loss[loss=0.2144, simple_loss=0.2686, pruned_loss=0.08009, over 4854.00 frames. ], tot_loss[loss=0.2055, simple_loss=0.2693, pruned_loss=0.07087, over 955851.00 frames. ], batch size: 31, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:57:02,705 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49592.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:57:28,149 INFO [finetune.py:976] (6/7) Epoch 9, batch 3800, loss[loss=0.206, simple_loss=0.2761, pruned_loss=0.06788, over 4820.00 frames. ], tot_loss[loss=0.2059, simple_loss=0.27, pruned_loss=0.07088, over 954353.15 frames. ], batch size: 33, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:57:38,668 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-26 10:57:39,045 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.635e+02 1.863e+02 2.259e+02 4.048e+02, threshold=3.725e+02, percent-clipped=0.0 2023-03-26 10:58:12,407 INFO [finetune.py:976] (6/7) Epoch 9, batch 3850, loss[loss=0.2349, simple_loss=0.2926, pruned_loss=0.0886, over 4204.00 frames. ], tot_loss[loss=0.2033, simple_loss=0.2675, pruned_loss=0.06956, over 954814.82 frames. ], batch size: 65, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:58:17,298 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5519, 2.2261, 1.7966, 0.8400, 1.9610, 1.9480, 1.7037, 1.9682], device='cuda:6'), covar=tensor([0.0811, 0.0870, 0.1532, 0.2098, 0.1420, 0.2361, 0.2191, 0.1035], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0199, 0.0200, 0.0187, 0.0215, 0.0206, 0.0222, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:58:38,598 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.42 vs. 
limit=5.0 2023-03-26 10:58:41,223 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2139, 2.1364, 1.7978, 2.2624, 2.1819, 1.8486, 2.5608, 2.2415], device='cuda:6'), covar=tensor([0.1195, 0.2396, 0.2772, 0.2384, 0.2145, 0.1483, 0.3065, 0.1552], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0189, 0.0233, 0.0254, 0.0238, 0.0195, 0.0212, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:58:48,064 INFO [finetune.py:976] (6/7) Epoch 9, batch 3900, loss[loss=0.2153, simple_loss=0.2625, pruned_loss=0.08408, over 4760.00 frames. ], tot_loss[loss=0.201, simple_loss=0.2649, pruned_loss=0.06853, over 955609.04 frames. ], batch size: 27, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:58:58,260 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.638e+02 1.913e+02 2.415e+02 4.821e+02, threshold=3.825e+02, percent-clipped=2.0 2023-03-26 10:59:15,377 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9302, 4.2961, 4.3192, 2.2390, 4.3735, 3.3494, 1.0811, 2.9892], device='cuda:6'), covar=tensor([0.2474, 0.1649, 0.1132, 0.3090, 0.0853, 0.0889, 0.4076, 0.1394], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0175, 0.0160, 0.0129, 0.0157, 0.0123, 0.0148, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 10:59:29,593 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.0486, 4.3831, 4.5998, 4.8221, 4.7633, 4.5517, 5.1353, 1.6252], device='cuda:6'), covar=tensor([0.0724, 0.0766, 0.0864, 0.0941, 0.1264, 0.1486, 0.0549, 0.5420], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0243, 0.0276, 0.0292, 0.0329, 0.0281, 0.0299, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 10:59:31,470 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=49766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:59:36,464 INFO [finetune.py:976] (6/7) Epoch 9, batch 3950, loss[loss=0.1891, simple_loss=0.2439, pruned_loss=0.06714, over 4880.00 frames. ], tot_loss[loss=0.1981, simple_loss=0.2614, pruned_loss=0.0674, over 953803.48 frames. ], batch size: 31, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 10:59:40,655 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49778.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 10:59:47,183 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=49788.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 11:00:02,055 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3010, 1.4424, 1.4761, 1.7040, 1.5515, 3.2025, 1.3413, 1.5486], device='cuda:6'), covar=tensor([0.1063, 0.1744, 0.1210, 0.1027, 0.1645, 0.0256, 0.1533, 0.1752], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0075, 0.0078, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0004, 0.0004], device='cuda:6') 2023-03-26 11:00:03,652 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-26 11:00:09,535 INFO [finetune.py:976] (6/7) Epoch 9, batch 4000, loss[loss=0.2388, simple_loss=0.3024, pruned_loss=0.08763, over 4764.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2613, pruned_loss=0.068, over 951658.21 frames. 
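
The scaling.py:679 Whitening lines compare a covariance-flatness metric against a per-module limit (2.0 for the 96- and 192-channel groups, 5.0 for the full 384-channel check); when the metric drifts above its limit, the module nudges activations back toward an identity-like channel covariance. One standard metric of this kind equals exactly 1.0 when the covariance is a multiple of the identity and grows as the spectrum spreads; a sketch, not necessarily scaling.py's precise formula:

    import torch

    def whitening_metric(x: torch.Tensor) -> torch.Tensor:
        # x: (num_frames, num_channels), assumed roughly zero-mean
        cov = x.t() @ x / x.shape[0]
        # ratio >= 1.0, with equality iff cov is proportional to the identity
        return (cov @ cov).diagonal().mean() / cov.diagonal().mean() ** 2

    print(whitening_metric(torch.randn(1000, 96)))  # near 1 for white noise,
    # cf. "metric=1.27 vs. limit=2.0" for the 96-channel group above
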
], batch size: 54, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 11:00:13,722 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=49827.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:00:16,514 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.645e+02 2.017e+02 2.376e+02 6.319e+02, threshold=4.034e+02, percent-clipped=3.0 2023-03-26 11:00:17,259 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2600, 2.2213, 2.0503, 2.3509, 2.8273, 2.3572, 1.9943, 1.8304], device='cuda:6'), covar=tensor([0.2138, 0.2086, 0.1828, 0.1559, 0.1873, 0.1057, 0.2357, 0.1918], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0210, 0.0207, 0.0190, 0.0242, 0.0180, 0.0215, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:00:21,335 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9936, 4.0878, 3.9359, 2.2124, 4.2046, 3.2298, 1.0740, 2.9408], device='cuda:6'), covar=tensor([0.1912, 0.2002, 0.1282, 0.3077, 0.0978, 0.0979, 0.4361, 0.1556], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0176, 0.0161, 0.0130, 0.0158, 0.0124, 0.0148, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 11:00:40,712 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 11:00:42,823 INFO [finetune.py:976] (6/7) Epoch 9, batch 4050, loss[loss=0.2102, simple_loss=0.2814, pruned_loss=0.06956, over 4900.00 frames. ], tot_loss[loss=0.2012, simple_loss=0.2646, pruned_loss=0.06897, over 951824.39 frames. ], batch size: 35, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 11:00:56,963 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=49892.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:01:16,000 INFO [finetune.py:976] (6/7) Epoch 9, batch 4100, loss[loss=0.1833, simple_loss=0.2311, pruned_loss=0.06774, over 4028.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2658, pruned_loss=0.06909, over 952387.67 frames. ], batch size: 17, lr: 3.78e-03, grad_scale: 32.0 2023-03-26 11:01:19,554 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5357, 2.7231, 2.4245, 1.9146, 2.3866, 2.8535, 2.5824, 2.3693], device='cuda:6'), covar=tensor([0.0560, 0.0559, 0.0721, 0.0842, 0.0642, 0.0595, 0.0666, 0.0851], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0132, 0.0143, 0.0123, 0.0117, 0.0143, 0.0143, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:01:22,949 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.254e+02 1.718e+02 2.083e+02 2.512e+02 3.689e+02, threshold=4.166e+02, percent-clipped=0.0 2023-03-26 11:01:28,941 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=49940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:01:48,770 INFO [finetune.py:976] (6/7) Epoch 9, batch 4150, loss[loss=0.2495, simple_loss=0.3106, pruned_loss=0.0942, over 4907.00 frames. ], tot_loss[loss=0.2041, simple_loss=0.2679, pruned_loss=0.07016, over 951725.32 frames. ], batch size: 37, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:02:23,457 INFO [finetune.py:976] (6/7) Epoch 9, batch 4200, loss[loss=0.2099, simple_loss=0.2664, pruned_loss=0.07674, over 4803.00 frames. ], tot_loss[loss=0.204, simple_loss=0.268, pruned_loss=0.07004, over 951739.32 frames. 
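
The grad_scale column doubled from 16.0 to 32.0 at batch 3100 and falls back to 16.0 by batch 4150 above: the signature of dynamic fp16 loss scaling, where the scale grows after a long run of finite gradients and is cut back when an inf/nan gradient is detected. The sketch below shows the stock torch.cuda.amp.GradScaler knobs that produce exactly this doubling/halving behavior; whether the recipe drives a stock scaler this way or manages the scale itself is not visible in the log:

    import torch
    from torch.cuda.amp import GradScaler

    scaler = GradScaler(init_scale=16.0,
                        growth_factor=2.0,    # 16.0 -> 32.0 after growth_interval clean steps
                        backoff_factor=0.5,   # 32.0 -> 16.0 on an inf/nan gradient
                        growth_interval=2000,
                        enabled=torch.cuda.is_available())
    # per batch: scaler.scale(loss).backward(); scaler.step(optimizer); scaler.update()
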
], batch size: 41, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:02:31,377 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.147e+02 1.706e+02 2.002e+02 2.506e+02 6.230e+02, threshold=4.003e+02, percent-clipped=2.0 2023-03-26 11:02:35,551 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0211, 1.7094, 2.0659, 0.6502, 2.2077, 2.5527, 1.8678, 1.7979], device='cuda:6'), covar=tensor([0.1172, 0.1327, 0.0646, 0.0982, 0.0607, 0.0669, 0.0771, 0.0871], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0157, 0.0122, 0.0136, 0.0133, 0.0127, 0.0148, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.6949e-05, 1.1560e-04, 8.8293e-05, 9.8841e-05, 9.5275e-05, 9.3194e-05, 1.0840e-04, 1.0960e-04], device='cuda:6') 2023-03-26 11:02:57,218 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 11:03:01,009 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5317, 3.9123, 4.1278, 4.3646, 4.3077, 3.9956, 4.6368, 1.4549], device='cuda:6'), covar=tensor([0.0729, 0.0802, 0.0736, 0.0870, 0.1142, 0.1351, 0.0571, 0.5456], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0244, 0.0276, 0.0292, 0.0329, 0.0281, 0.0300, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:03:11,732 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7960, 1.6250, 1.4788, 1.3906, 1.8469, 1.5661, 1.8419, 1.7500], device='cuda:6'), covar=tensor([0.1487, 0.2410, 0.3348, 0.2746, 0.2789, 0.1859, 0.3171, 0.2005], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0188, 0.0233, 0.0253, 0.0237, 0.0194, 0.0212, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:03:15,795 INFO [finetune.py:976] (6/7) Epoch 9, batch 4250, loss[loss=0.2187, simple_loss=0.2747, pruned_loss=0.08133, over 4908.00 frames. ], tot_loss[loss=0.2024, simple_loss=0.2657, pruned_loss=0.06958, over 952543.51 frames. ], batch size: 36, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:03:24,160 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:03:32,826 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50088.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:03:53,926 INFO [finetune.py:976] (6/7) Epoch 9, batch 4300, loss[loss=0.1679, simple_loss=0.2263, pruned_loss=0.0548, over 4822.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2627, pruned_loss=0.06812, over 952902.02 frames. 
], batch size: 40, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:03:53,999 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50122.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:03:56,367 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50126.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:03:57,639 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50128.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:04:00,427 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.608e+02 1.917e+02 2.228e+02 4.011e+02, threshold=3.835e+02, percent-clipped=1.0 2023-03-26 11:04:03,884 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50136.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:04:49,370 INFO [finetune.py:976] (6/7) Epoch 9, batch 4350, loss[loss=0.1546, simple_loss=0.2131, pruned_loss=0.04811, over 4754.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.2582, pruned_loss=0.06626, over 954226.73 frames. ], batch size: 27, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:05:18,374 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50189.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:05:30,445 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50197.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:05:44,724 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.90 vs. limit=5.0 2023-03-26 11:05:54,670 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6316, 1.5062, 1.4951, 1.6410, 1.2358, 3.5790, 1.5418, 2.1502], device='cuda:6'), covar=tensor([0.3282, 0.2331, 0.2083, 0.2208, 0.1758, 0.0172, 0.2484, 0.1191], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0118, 0.0122, 0.0115, 0.0097, 0.0099, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 11:06:02,456 INFO [finetune.py:976] (6/7) Epoch 9, batch 4400, loss[loss=0.1924, simple_loss=0.2594, pruned_loss=0.06264, over 4863.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.2599, pruned_loss=0.06737, over 954656.73 frames. ], batch size: 31, lr: 3.78e-03, grad_scale: 16.0 2023-03-26 11:06:13,197 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 11:06:14,078 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.827e+01 1.714e+02 1.989e+02 2.480e+02 5.028e+02, threshold=3.977e+02, percent-clipped=2.0 2023-03-26 11:06:26,784 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5082, 2.2064, 2.0233, 2.4675, 2.2951, 2.2615, 2.1906, 3.3504], device='cuda:6'), covar=tensor([0.4286, 0.6564, 0.3790, 0.5374, 0.4966, 0.2825, 0.5908, 0.1600], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0259, 0.0220, 0.0279, 0.0242, 0.0207, 0.0244, 0.0210], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:06:48,136 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50258.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:06:58,871 INFO [finetune.py:976] (6/7) Epoch 9, batch 4450, loss[loss=0.2195, simple_loss=0.2797, pruned_loss=0.07966, over 4760.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2626, pruned_loss=0.06824, over 955241.59 frames. 
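
Note that tot_loss is not a whole-epoch mean: its "over N frames" count hovers around 9.5e5 instead of growing without bound, so it behaves like a frame-weighted average with exponential forgetting over the last couple hundred batches. A sketch of such a tracker (the decay constant is a placeholder, not the recipe's value):

    class RunningLoss:
        """Frame-weighted running average with exponential forgetting (sketch)."""

        def __init__(self, decay: float = 0.999):
            self.decay = decay
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> None:
            self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames

        @property
        def value(self) -> float:
            return self.loss_sum / max(self.frames, 1.0)
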
], batch size: 26, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:07:32,435 INFO [finetune.py:976] (6/7) Epoch 9, batch 4500, loss[loss=0.2171, simple_loss=0.2824, pruned_loss=0.0759, over 4870.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2658, pruned_loss=0.0691, over 957024.96 frames. ], batch size: 31, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:07:38,448 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.523e+02 1.896e+02 2.480e+02 4.445e+02, threshold=3.793e+02, percent-clipped=2.0 2023-03-26 11:08:05,989 INFO [finetune.py:976] (6/7) Epoch 9, batch 4550, loss[loss=0.1993, simple_loss=0.27, pruned_loss=0.06432, over 4836.00 frames. ], tot_loss[loss=0.2035, simple_loss=0.2674, pruned_loss=0.06983, over 956705.26 frames. ], batch size: 30, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:08:58,493 INFO [finetune.py:976] (6/7) Epoch 9, batch 4600, loss[loss=0.1773, simple_loss=0.2515, pruned_loss=0.05157, over 4822.00 frames. ], tot_loss[loss=0.2042, simple_loss=0.2681, pruned_loss=0.07011, over 955728.41 frames. ], batch size: 33, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:08:58,586 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50422.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:09:06,261 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.671e+02 1.983e+02 2.416e+02 3.848e+02, threshold=3.965e+02, percent-clipped=1.0 2023-03-26 11:09:23,056 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2321, 1.9068, 1.4643, 0.5525, 1.6292, 1.9530, 1.7380, 1.7800], device='cuda:6'), covar=tensor([0.0864, 0.0762, 0.1406, 0.1883, 0.1362, 0.1840, 0.2096, 0.0909], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0201, 0.0187, 0.0215, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:09:39,361 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50470.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:09:40,501 INFO [finetune.py:976] (6/7) Epoch 9, batch 4650, loss[loss=0.1406, simple_loss=0.2142, pruned_loss=0.03348, over 4782.00 frames. ], tot_loss[loss=0.2009, simple_loss=0.2647, pruned_loss=0.06849, over 955061.36 frames. ], batch size: 28, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:09:50,493 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50484.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:10:22,781 INFO [finetune.py:976] (6/7) Epoch 9, batch 4700, loss[loss=0.2026, simple_loss=0.2593, pruned_loss=0.07292, over 4737.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2611, pruned_loss=0.0671, over 955265.90 frames. ], batch size: 23, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:10:28,457 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. 
limit=2.0 2023-03-26 11:10:29,351 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.527e+02 1.850e+02 2.224e+02 3.838e+02, threshold=3.699e+02, percent-clipped=0.0 2023-03-26 11:10:38,053 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50546.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:10:42,740 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50553.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 11:10:56,120 INFO [finetune.py:976] (6/7) Epoch 9, batch 4750, loss[loss=0.1917, simple_loss=0.2608, pruned_loss=0.06134, over 4900.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.259, pruned_loss=0.06631, over 954873.88 frames. ], batch size: 35, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:11:18,591 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50607.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:11:29,495 INFO [finetune.py:976] (6/7) Epoch 9, batch 4800, loss[loss=0.1633, simple_loss=0.2314, pruned_loss=0.04759, over 4745.00 frames. ], tot_loss[loss=0.1991, simple_loss=0.2627, pruned_loss=0.0677, over 954463.89 frames. ], batch size: 23, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:11:36,104 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.695e+01 1.587e+02 1.907e+02 2.189e+02 3.978e+02, threshold=3.813e+02, percent-clipped=2.0 2023-03-26 11:11:55,382 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 11:12:03,073 INFO [finetune.py:976] (6/7) Epoch 9, batch 4850, loss[loss=0.1693, simple_loss=0.2428, pruned_loss=0.04792, over 4752.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2667, pruned_loss=0.06862, over 955473.28 frames. ], batch size: 27, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:12:23,756 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 11:12:36,220 INFO [finetune.py:976] (6/7) Epoch 9, batch 4900, loss[loss=0.206, simple_loss=0.2756, pruned_loss=0.06824, over 4757.00 frames. ], tot_loss[loss=0.2041, simple_loss=0.2686, pruned_loss=0.06976, over 954074.73 frames. ], batch size: 54, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:12:42,296 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.610e+02 1.915e+02 2.289e+02 4.400e+02, threshold=3.830e+02, percent-clipped=2.0 2023-03-26 11:13:08,688 INFO [finetune.py:976] (6/7) Epoch 9, batch 4950, loss[loss=0.1757, simple_loss=0.2497, pruned_loss=0.05089, over 4896.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2681, pruned_loss=0.06894, over 953975.51 frames. 
], batch size: 36, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:13:15,371 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50782.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:13:16,532 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50784.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:13:23,019 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7194, 1.5712, 1.9569, 1.4799, 1.7867, 1.9233, 1.4962, 2.1036], device='cuda:6'), covar=tensor([0.0936, 0.1538, 0.1097, 0.1324, 0.0648, 0.0884, 0.2108, 0.0512], device='cuda:6'), in_proj_covar=tensor([0.0200, 0.0206, 0.0195, 0.0193, 0.0180, 0.0219, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:13:25,113 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 11:13:33,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50808.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:13:51,971 INFO [finetune.py:976] (6/7) Epoch 9, batch 5000, loss[loss=0.2302, simple_loss=0.274, pruned_loss=0.09317, over 4896.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.266, pruned_loss=0.06849, over 954031.40 frames. ], batch size: 35, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:14:03,063 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.212e+02 1.752e+02 2.044e+02 2.444e+02 6.074e+02, threshold=4.089e+02, percent-clipped=4.0 2023-03-26 11:14:03,136 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50832.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:14:13,400 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50843.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:14:24,233 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=50853.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:14:37,853 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=50869.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:14:40,059 INFO [finetune.py:976] (6/7) Epoch 9, batch 5050, loss[loss=0.1947, simple_loss=0.2649, pruned_loss=0.06228, over 4869.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2629, pruned_loss=0.06719, over 955353.77 frames. ], batch size: 31, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:15:00,246 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=50901.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:15:00,838 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=50902.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:15:21,255 INFO [finetune.py:976] (6/7) Epoch 9, batch 5100, loss[loss=0.227, simple_loss=0.2697, pruned_loss=0.09212, over 4073.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2592, pruned_loss=0.06565, over 956389.18 frames. 
], batch size: 65, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:15:28,227 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3605, 1.1743, 1.2105, 1.2688, 1.5739, 1.4755, 1.3774, 1.1740], device='cuda:6'), covar=tensor([0.0299, 0.0271, 0.0506, 0.0282, 0.0213, 0.0381, 0.0276, 0.0337], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0108, 0.0138, 0.0114, 0.0101, 0.0100, 0.0091, 0.0107], device='cuda:6'), out_proj_covar=tensor([6.9768e-05, 8.4620e-05, 1.0986e-04, 8.9316e-05, 7.9342e-05, 7.4578e-05, 6.8530e-05, 8.2665e-05], device='cuda:6') 2023-03-26 11:15:29,782 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.571e+02 1.989e+02 2.366e+02 5.072e+02, threshold=3.977e+02, percent-clipped=2.0 2023-03-26 11:15:37,047 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9351, 1.3382, 1.9070, 1.8486, 1.6639, 1.6185, 1.7988, 1.7619], device='cuda:6'), covar=tensor([0.4440, 0.5107, 0.4079, 0.4804, 0.5597, 0.4346, 0.6032, 0.4039], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0242, 0.0254, 0.0257, 0.0252, 0.0227, 0.0275, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:15:38,837 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=50946.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:15:55,122 INFO [finetune.py:976] (6/7) Epoch 9, batch 5150, loss[loss=0.2405, simple_loss=0.2972, pruned_loss=0.09197, over 4905.00 frames. ], tot_loss[loss=0.1964, simple_loss=0.2598, pruned_loss=0.06649, over 955324.04 frames. ], batch size: 43, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:16:19,650 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51007.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:16:29,112 INFO [finetune.py:976] (6/7) Epoch 9, batch 5200, loss[loss=0.2165, simple_loss=0.3021, pruned_loss=0.06545, over 4935.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.2635, pruned_loss=0.06768, over 953226.23 frames. ], batch size: 42, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:16:37,435 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.691e+02 2.095e+02 2.506e+02 4.401e+02, threshold=4.191e+02, percent-clipped=2.0 2023-03-26 11:17:06,731 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3142, 2.1089, 1.6917, 2.1521, 2.1878, 1.8413, 2.5016, 2.1347], device='cuda:6'), covar=tensor([0.1370, 0.2407, 0.3336, 0.2895, 0.2666, 0.1778, 0.3343, 0.1976], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0188, 0.0233, 0.0254, 0.0238, 0.0195, 0.0212, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:17:12,580 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51062.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:17:18,566 INFO [finetune.py:976] (6/7) Epoch 9, batch 5250, loss[loss=0.2378, simple_loss=0.2986, pruned_loss=0.08852, over 4786.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2658, pruned_loss=0.0684, over 953502.56 frames. 
], batch size: 29, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:17:19,952 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6647, 1.4767, 1.3920, 1.7108, 1.9485, 1.7278, 1.1159, 1.4080], device='cuda:6'), covar=tensor([0.2149, 0.2142, 0.1947, 0.1628, 0.1612, 0.1226, 0.2704, 0.1886], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0209, 0.0206, 0.0189, 0.0241, 0.0180, 0.0214, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:17:41,763 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3079, 1.3268, 1.6206, 1.1250, 1.1681, 1.4829, 1.3123, 1.6068], device='cuda:6'), covar=tensor([0.1110, 0.1959, 0.1134, 0.1423, 0.0920, 0.1059, 0.2547, 0.0766], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0203, 0.0193, 0.0190, 0.0178, 0.0216, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:17:51,198 INFO [finetune.py:976] (6/7) Epoch 9, batch 5300, loss[loss=0.2138, simple_loss=0.2726, pruned_loss=0.07756, over 4116.00 frames. ], tot_loss[loss=0.2029, simple_loss=0.2677, pruned_loss=0.06904, over 954460.85 frames. ], batch size: 65, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:17:51,958 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51123.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:17:57,263 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.689e+02 2.023e+02 2.414e+02 5.734e+02, threshold=4.045e+02, percent-clipped=1.0 2023-03-26 11:18:01,858 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51138.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:18:18,280 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 11:18:19,580 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51164.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 11:18:24,358 INFO [finetune.py:976] (6/7) Epoch 9, batch 5350, loss[loss=0.1837, simple_loss=0.2593, pruned_loss=0.0541, over 4891.00 frames. ], tot_loss[loss=0.2038, simple_loss=0.2684, pruned_loss=0.06962, over 954245.52 frames. 
], batch size: 32, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:18:24,449 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51172.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:18:55,667 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0430, 1.1462, 1.9434, 1.8789, 1.7730, 1.6800, 1.7824, 1.8374], device='cuda:6'), covar=tensor([0.3676, 0.4509, 0.4008, 0.3996, 0.5107, 0.3975, 0.5130, 0.3624], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0241, 0.0254, 0.0256, 0.0251, 0.0226, 0.0275, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:18:56,211 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51202.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:19:18,026 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1774, 2.0521, 2.6721, 1.6349, 2.3956, 2.3960, 1.9316, 2.6358], device='cuda:6'), covar=tensor([0.1289, 0.1645, 0.1428, 0.2191, 0.0729, 0.1386, 0.2361, 0.0774], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0203, 0.0193, 0.0190, 0.0178, 0.0216, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:19:18,526 INFO [finetune.py:976] (6/7) Epoch 9, batch 5400, loss[loss=0.151, simple_loss=0.2152, pruned_loss=0.0434, over 4726.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2643, pruned_loss=0.06797, over 955436.26 frames. ], batch size: 59, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:19:24,468 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. limit=5.0 2023-03-26 11:19:26,734 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.913e+01 1.608e+02 1.826e+02 2.251e+02 3.272e+02, threshold=3.651e+02, percent-clipped=0.0 2023-03-26 11:19:32,710 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51233.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 11:19:48,898 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51250.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:19:53,056 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1433, 1.9488, 1.8185, 2.1852, 2.7280, 2.1233, 1.9559, 1.6537], device='cuda:6'), covar=tensor([0.2035, 0.2103, 0.1833, 0.1616, 0.1808, 0.1160, 0.2274, 0.1987], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0210, 0.0208, 0.0189, 0.0243, 0.0181, 0.0215, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:19:54,350 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.55 vs. limit=5.0 2023-03-26 11:20:03,231 INFO [finetune.py:976] (6/7) Epoch 9, batch 5450, loss[loss=0.1965, simple_loss=0.2606, pruned_loss=0.06621, over 4909.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2607, pruned_loss=0.06628, over 956980.24 frames. ], batch size: 35, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:20:31,733 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51302.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:20:50,279 INFO [finetune.py:976] (6/7) Epoch 9, batch 5500, loss[loss=0.2256, simple_loss=0.2948, pruned_loss=0.07818, over 4905.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.2588, pruned_loss=0.06589, over 957738.77 frames. 
], batch size: 43, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:20:56,825 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.581e+02 1.869e+02 2.249e+02 3.902e+02, threshold=3.738e+02, percent-clipped=2.0 2023-03-26 11:21:23,242 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3794, 1.5949, 1.2625, 1.4769, 1.7882, 1.5530, 1.4977, 1.3650], device='cuda:6'), covar=tensor([0.0339, 0.0268, 0.0564, 0.0296, 0.0188, 0.0577, 0.0278, 0.0348], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0110, 0.0141, 0.0117, 0.0103, 0.0103, 0.0092, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.1193e-05, 8.6235e-05, 1.1206e-04, 9.1541e-05, 8.0991e-05, 7.6444e-05, 6.9872e-05, 8.4048e-05], device='cuda:6') 2023-03-26 11:21:48,954 INFO [finetune.py:976] (6/7) Epoch 9, batch 5550, loss[loss=0.1914, simple_loss=0.2714, pruned_loss=0.05572, over 4923.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2627, pruned_loss=0.06762, over 958123.36 frames. ], batch size: 42, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:21:59,528 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51382.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:07,967 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51392.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:24,895 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51418.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:27,579 INFO [finetune.py:976] (6/7) Epoch 9, batch 5600, loss[loss=0.2251, simple_loss=0.2815, pruned_loss=0.08436, over 4826.00 frames. ], tot_loss[loss=0.2011, simple_loss=0.2652, pruned_loss=0.06853, over 955115.93 frames. ], batch size: 30, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:22:33,289 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.626e+02 2.005e+02 2.362e+02 4.096e+02, threshold=4.011e+02, percent-clipped=2.0 2023-03-26 11:22:36,872 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51438.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:39,813 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51443.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:46,045 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51453.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:22:52,434 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51464.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:22:57,071 INFO [finetune.py:976] (6/7) Epoch 9, batch 5650, loss[loss=0.1891, simple_loss=0.2645, pruned_loss=0.05684, over 4813.00 frames. ], tot_loss[loss=0.2042, simple_loss=0.2688, pruned_loss=0.06981, over 954502.54 frames. ], batch size: 33, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:23:05,307 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:23:10,714 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51495.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:23:20,778 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51512.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:23:26,711 INFO [finetune.py:976] (6/7) Epoch 9, batch 5700, loss[loss=0.1931, simple_loss=0.256, pruned_loss=0.06512, over 4153.00 frames. 
], tot_loss[loss=0.2021, simple_loss=0.2645, pruned_loss=0.06981, over 936415.70 frames. ], batch size: 18, lr: 3.77e-03, grad_scale: 16.0 2023-03-26 11:23:30,371 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51528.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:23:32,889 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.987e+01 1.624e+02 1.963e+02 2.341e+02 6.572e+02, threshold=3.927e+02, percent-clipped=1.0 2023-03-26 11:23:35,933 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9864, 1.8380, 2.4164, 3.4086, 2.4480, 2.7280, 1.4743, 2.6727], device='cuda:6'), covar=tensor([0.1643, 0.1456, 0.1248, 0.0583, 0.0771, 0.1160, 0.1707, 0.0627], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0164, 0.0102, 0.0138, 0.0126, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 11:23:57,294 INFO [finetune.py:976] (6/7) Epoch 10, batch 0, loss[loss=0.1969, simple_loss=0.2686, pruned_loss=0.06259, over 4791.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2686, pruned_loss=0.06259, over 4791.00 frames. ], batch size: 25, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:23:57,294 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 11:24:16,169 INFO [finetune.py:1010] (6/7) Epoch 10, validation: loss=0.1604, simple_loss=0.2317, pruned_loss=0.04451, over 2265189.00 frames. 2023-03-26 11:24:16,170 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 11:24:16,248 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.1405, 4.5020, 4.6443, 4.8902, 4.8033, 4.6026, 5.2773, 1.5985], device='cuda:6'), covar=tensor([0.0706, 0.0793, 0.0674, 0.0834, 0.1291, 0.1518, 0.0440, 0.5355], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0246, 0.0276, 0.0293, 0.0331, 0.0283, 0.0302, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:24:22,516 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51556.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:24:33,803 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7376, 1.8747, 1.5169, 1.5374, 2.1904, 2.1429, 1.8758, 1.8374], device='cuda:6'), covar=tensor([0.0376, 0.0386, 0.0573, 0.0374, 0.0321, 0.0565, 0.0304, 0.0337], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0109, 0.0139, 0.0115, 0.0102, 0.0101, 0.0091, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0138e-05, 8.5163e-05, 1.1099e-04, 8.9977e-05, 8.0199e-05, 7.5100e-05, 6.8878e-05, 8.2737e-05], device='cuda:6') 2023-03-26 11:24:58,291 INFO [finetune.py:976] (6/7) Epoch 10, batch 50, loss[loss=0.1798, simple_loss=0.2498, pruned_loss=0.05489, over 4781.00 frames. ], tot_loss[loss=0.202, simple_loss=0.2659, pruned_loss=0.069, over 216196.45 frames. ], batch size: 29, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:25:01,612 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51602.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:25:20,210 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.916e+01 1.735e+02 2.131e+02 2.642e+02 7.480e+02, threshold=4.262e+02, percent-clipped=4.0 2023-03-26 11:25:31,993 INFO [finetune.py:976] (6/7) Epoch 10, batch 100, loss[loss=0.1534, simple_loss=0.2275, pruned_loss=0.03963, over 4762.00 frames. 
], tot_loss[loss=0.1962, simple_loss=0.2599, pruned_loss=0.0663, over 381763.77 frames. ], batch size: 28, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:25:33,106 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:04,772 INFO [finetune.py:976] (6/7) Epoch 10, batch 150, loss[loss=0.2133, simple_loss=0.2814, pruned_loss=0.07267, over 4818.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2571, pruned_loss=0.06595, over 510119.85 frames. ], batch size: 38, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:26:18,720 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51718.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:33,392 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.594e+02 1.858e+02 2.240e+02 3.308e+02, threshold=3.716e+02, percent-clipped=0.0 2023-03-26 11:26:37,610 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51738.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:47,600 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=51747.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:51,481 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:26:52,000 INFO [finetune.py:976] (6/7) Epoch 10, batch 200, loss[loss=0.1969, simple_loss=0.2468, pruned_loss=0.07349, over 4813.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.2588, pruned_loss=0.06805, over 607811.23 frames. ], batch size: 25, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:27:04,853 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:27:25,425 INFO [finetune.py:976] (6/7) Epoch 10, batch 250, loss[loss=0.2085, simple_loss=0.28, pruned_loss=0.06851, over 4724.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.261, pruned_loss=0.0681, over 685817.83 frames. ], batch size: 59, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:27:33,011 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=51808.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:27:45,689 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=51828.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:27:48,004 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.605e+02 1.930e+02 2.331e+02 5.576e+02, threshold=3.861e+02, percent-clipped=5.0 2023-03-26 11:27:58,881 INFO [finetune.py:976] (6/7) Epoch 10, batch 300, loss[loss=0.1866, simple_loss=0.2588, pruned_loss=0.05721, over 4813.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2647, pruned_loss=0.06847, over 746020.77 frames. ], batch size: 38, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:28:00,161 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=51851.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:28:17,691 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=51876.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:28:31,953 INFO [finetune.py:976] (6/7) Epoch 10, batch 350, loss[loss=0.2324, simple_loss=0.2996, pruned_loss=0.08264, over 4885.00 frames. ], tot_loss[loss=0.2006, simple_loss=0.2651, pruned_loss=0.06811, over 793599.53 frames. 
], batch size: 43, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:28:54,279 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.678e+02 2.044e+02 2.443e+02 3.814e+02, threshold=4.089e+02, percent-clipped=0.0 2023-03-26 11:29:04,644 INFO [finetune.py:976] (6/7) Epoch 10, batch 400, loss[loss=0.1876, simple_loss=0.261, pruned_loss=0.0571, over 4820.00 frames. ], tot_loss[loss=0.2, simple_loss=0.265, pruned_loss=0.06746, over 828386.21 frames. ], batch size: 47, lr: 3.76e-03, grad_scale: 16.0 2023-03-26 11:29:56,981 INFO [finetune.py:976] (6/7) Epoch 10, batch 450, loss[loss=0.2258, simple_loss=0.2844, pruned_loss=0.0836, over 4870.00 frames. ], tot_loss[loss=0.1998, simple_loss=0.265, pruned_loss=0.06735, over 857406.70 frames. ], batch size: 34, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:30:21,154 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.724e+02 2.025e+02 2.574e+02 4.346e+02, threshold=4.050e+02, percent-clipped=1.0 2023-03-26 11:30:24,977 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52038.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:30:26,192 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52040.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:30:30,947 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52048.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:30:31,463 INFO [finetune.py:976] (6/7) Epoch 10, batch 500, loss[loss=0.2064, simple_loss=0.2655, pruned_loss=0.07366, over 4810.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2623, pruned_loss=0.06649, over 878841.65 frames. ], batch size: 39, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:30:41,524 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.79 vs. limit=5.0 2023-03-26 11:30:56,282 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1030, 1.8726, 1.7007, 1.7328, 2.1172, 1.8343, 2.2290, 2.0635], device='cuda:6'), covar=tensor([0.1544, 0.2477, 0.3534, 0.2967, 0.2842, 0.1839, 0.3327, 0.2160], device='cuda:6'), in_proj_covar=tensor([0.0174, 0.0188, 0.0233, 0.0253, 0.0239, 0.0196, 0.0212, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:30:56,809 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52086.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:02,786 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52096.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:04,604 INFO [finetune.py:976] (6/7) Epoch 10, batch 550, loss[loss=0.1967, simple_loss=0.2537, pruned_loss=0.06982, over 4911.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2594, pruned_loss=0.06533, over 898651.60 frames. 
], batch size: 32, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:31:05,939 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52101.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:07,067 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52103.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:31:27,058 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.591e+02 1.822e+02 2.163e+02 6.487e+02, threshold=3.643e+02, percent-clipped=1.0 2023-03-26 11:31:32,022 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7115, 1.6513, 1.5246, 1.8115, 2.0700, 1.7266, 1.4232, 1.5194], device='cuda:6'), covar=tensor([0.1798, 0.1891, 0.1677, 0.1435, 0.1661, 0.1165, 0.2546, 0.1614], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0210, 0.0209, 0.0189, 0.0242, 0.0181, 0.0215, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:31:37,961 INFO [finetune.py:976] (6/7) Epoch 10, batch 600, loss[loss=0.2376, simple_loss=0.2962, pruned_loss=0.08949, over 4198.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2591, pruned_loss=0.06548, over 909125.17 frames. ], batch size: 65, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:31:39,262 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52151.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:32:10,101 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2600, 2.1724, 1.6676, 2.4080, 2.2768, 1.8124, 2.8017, 2.2807], device='cuda:6'), covar=tensor([0.1567, 0.2962, 0.3622, 0.3233, 0.2974, 0.1929, 0.3806, 0.2076], device='cuda:6'), in_proj_covar=tensor([0.0175, 0.0189, 0.0234, 0.0255, 0.0240, 0.0196, 0.0213, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:32:11,830 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7393, 1.5699, 1.5594, 1.6828, 1.1670, 4.0770, 1.6969, 2.1387], device='cuda:6'), covar=tensor([0.3391, 0.2367, 0.2043, 0.2130, 0.1743, 0.0149, 0.2522, 0.1227], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0123, 0.0116, 0.0099, 0.0099, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 11:32:15,959 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52193.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:32:19,559 INFO [finetune.py:976] (6/7) Epoch 10, batch 650, loss[loss=0.1559, simple_loss=0.2391, pruned_loss=0.0364, over 4821.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2614, pruned_loss=0.06623, over 918235.05 frames. 
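
The attn_weights_entropy dumps above are per-head diagnostics of how peaked the attention distributions are (the accompanying covar tensors are companion statistics logged the same way). A minimal sketch of the entropy itself, assuming attn is a (batch, heads, query, key) tensor of softmax weights; the reduction over batch and query positions is illustrative.

    import torch

    def attention_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        # entropy of each attention row: -(sum p log p) over the key axis
        ent = -(attn * (attn + eps).log()).sum(dim=-1)
        # one value per head, matching the 8-element tensors in the log
        return ent.mean(dim=(0, 2))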
], batch size: 33, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:32:19,617 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52199.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:32:42,611 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.681e+02 1.969e+02 2.336e+02 3.855e+02, threshold=3.938e+02, percent-clipped=2.0 2023-03-26 11:32:46,415 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5219, 1.3695, 1.6727, 1.6867, 1.4537, 3.3161, 1.2270, 1.4993], device='cuda:6'), covar=tensor([0.1002, 0.1867, 0.1340, 0.1076, 0.1742, 0.0232, 0.1550, 0.1805], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0076, 0.0078, 0.0091, 0.0082, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 11:32:53,489 INFO [finetune.py:976] (6/7) Epoch 10, batch 700, loss[loss=0.2091, simple_loss=0.2897, pruned_loss=0.06424, over 4912.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2645, pruned_loss=0.0671, over 926140.60 frames. ], batch size: 42, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:32:55,071 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 11:32:56,651 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52254.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:33:26,705 INFO [finetune.py:976] (6/7) Epoch 10, batch 750, loss[loss=0.2077, simple_loss=0.2769, pruned_loss=0.06922, over 4880.00 frames. ], tot_loss[loss=0.2015, simple_loss=0.2668, pruned_loss=0.0681, over 932104.35 frames. ], batch size: 32, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:33:45,065 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52312.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:34:02,791 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.612e+02 1.864e+02 2.364e+02 4.342e+02, threshold=3.728e+02, percent-clipped=1.0 2023-03-26 11:34:15,211 INFO [finetune.py:976] (6/7) Epoch 10, batch 800, loss[loss=0.1518, simple_loss=0.2183, pruned_loss=0.04264, over 4830.00 frames. ], tot_loss[loss=0.2003, simple_loss=0.2661, pruned_loss=0.06729, over 936483.01 frames. ], batch size: 33, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:34:30,567 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52373.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:34:49,110 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52396.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:34:50,911 INFO [finetune.py:976] (6/7) Epoch 10, batch 850, loss[loss=0.1777, simple_loss=0.2362, pruned_loss=0.05963, over 4665.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2645, pruned_loss=0.06701, over 939575.62 frames. ], batch size: 23, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:34:54,131 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52403.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:35:14,904 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.556e+02 1.848e+02 2.239e+02 3.627e+02, threshold=3.695e+02, percent-clipped=0.0 2023-03-26 11:35:26,401 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 11:35:36,905 INFO [finetune.py:976] (6/7) Epoch 10, batch 900, loss[loss=0.1875, simple_loss=0.2399, pruned_loss=0.06752, over 4131.00 frames. 
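
The recurring "warmup_begin=... warmup_end=... num_to_drop=... layers_to_drop=..." lines track stochastic layer skipping in the encoder stacks. A sketch of the idea only, assuming each stack may drop at most one layer per batch while its batch count is inside a warmup window; the probability and the window logic here are simplified guesses, not the zipformer.py schedule.

    import random

    def pick_layers_to_drop(batch_count, warmup_begin, warmup_end,
                            num_layers, drop_prob=0.05):
        layers_to_drop = set()
        if warmup_begin <= batch_count <= warmup_end and random.random() < drop_prob:
            layers_to_drop.add(random.randrange(num_layers))
        return layers_to_drop  # e.g. set() or {0}, as in the log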
], tot_loss[loss=0.1972, simple_loss=0.2615, pruned_loss=0.06647, over 940745.01 frames. ], batch size: 65, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:35:38,214 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52451.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:35:59,108 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4673, 2.8720, 2.4295, 1.9470, 2.7175, 2.9797, 2.7911, 2.4236], device='cuda:6'), covar=tensor([0.0705, 0.0567, 0.0861, 0.0954, 0.0532, 0.0813, 0.0700, 0.1015], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0132, 0.0142, 0.0123, 0.0118, 0.0141, 0.0141, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:36:16,383 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-26 11:36:25,700 INFO [finetune.py:976] (6/7) Epoch 10, batch 950, loss[loss=0.2032, simple_loss=0.2725, pruned_loss=0.06689, over 4899.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2607, pruned_loss=0.06621, over 944760.75 frames. ], batch size: 35, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:36:45,061 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9549, 1.9548, 1.6475, 2.0813, 1.8820, 1.8714, 1.8767, 2.6273], device='cuda:6'), covar=tensor([0.4746, 0.6274, 0.4068, 0.5607, 0.5622, 0.2921, 0.5651, 0.1967], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0259, 0.0222, 0.0279, 0.0243, 0.0208, 0.0245, 0.0212], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:36:46,733 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.551e+02 1.918e+02 2.238e+02 5.409e+02, threshold=3.837e+02, percent-clipped=4.0 2023-03-26 11:37:01,196 INFO [finetune.py:976] (6/7) Epoch 10, batch 1000, loss[loss=0.2036, simple_loss=0.2721, pruned_loss=0.06751, over 4871.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2627, pruned_loss=0.06649, over 947382.03 frames. ], batch size: 34, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:37:01,276 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52549.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:37:04,948 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52555.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:37:05,221 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0 2023-03-26 11:37:07,357 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 11:38:00,468 INFO [finetune.py:976] (6/7) Epoch 10, batch 1050, loss[loss=0.183, simple_loss=0.2528, pruned_loss=0.05665, over 4863.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.265, pruned_loss=0.06718, over 949195.83 frames. 
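
tot_loss is a running, frame-weighted average of the per-batch losses: its "over N frames" count climbs through the early batches of the epoch and then plateaus near 9.5e5 frames, which is consistent with an exponentially forgotten average. A sketch under that assumption (icefall's MetricsTracker plays this role; the dict-based helper and the decay constant are illustrative):

    def update_tot_loss(tot, batch_loss, batch_frames, alpha=0.995):
        # decay old statistics slightly, then add the new batch, so the
        # average tracks recent batches while still spanning many frames
        tot["loss"] = tot["loss"] * alpha + batch_loss * batch_frames
        tot["frames"] = tot["frames"] * alpha + batch_frames
        return tot["loss"] / tot["frames"]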
], batch size: 31, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:38:09,726 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6479, 1.9668, 1.5782, 1.5472, 2.0603, 2.0158, 1.9642, 1.8711], device='cuda:6'), covar=tensor([0.0437, 0.0327, 0.0528, 0.0394, 0.0309, 0.0753, 0.0278, 0.0375], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0109, 0.0139, 0.0114, 0.0102, 0.0102, 0.0091, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.0390e-05, 8.4967e-05, 1.1073e-04, 8.9827e-05, 7.9787e-05, 7.5280e-05, 6.8853e-05, 8.3132e-05], device='cuda:6') 2023-03-26 11:38:19,860 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-26 11:38:21,492 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52616.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 11:38:31,486 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.240e+02 1.591e+02 1.928e+02 2.293e+02 3.930e+02, threshold=3.855e+02, percent-clipped=1.0 2023-03-26 11:38:38,204 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 11:38:44,946 INFO [finetune.py:976] (6/7) Epoch 10, batch 1100, loss[loss=0.2043, simple_loss=0.2754, pruned_loss=0.0666, over 4771.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2652, pruned_loss=0.06669, over 950716.68 frames. ], batch size: 28, lr: 3.76e-03, grad_scale: 32.0 2023-03-26 11:38:59,651 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52668.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:39:17,759 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52696.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:39:19,453 INFO [finetune.py:976] (6/7) Epoch 10, batch 1150, loss[loss=0.224, simple_loss=0.2968, pruned_loss=0.07559, over 4808.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2645, pruned_loss=0.06642, over 950714.53 frames. ], batch size: 40, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:39:40,840 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.654e+02 1.930e+02 2.314e+02 4.484e+02, threshold=3.861e+02, percent-clipped=2.0 2023-03-26 11:39:48,716 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52744.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:39:52,581 INFO [finetune.py:976] (6/7) Epoch 10, batch 1200, loss[loss=0.1592, simple_loss=0.2228, pruned_loss=0.04783, over 4770.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.2628, pruned_loss=0.06576, over 951976.43 frames. ], batch size: 29, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:39:57,312 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=52752.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:40:35,764 INFO [finetune.py:976] (6/7) Epoch 10, batch 1250, loss[loss=0.188, simple_loss=0.2513, pruned_loss=0.0624, over 4900.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2603, pruned_loss=0.06463, over 953502.60 frames. 
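
The "Whitening: num_groups=..., num_channels=..., metric=... vs. limit=..." lines from scaling.py compare a covariance statistic against a limit, with a penalty applied only when the limit is exceeded. A sketch of one plausible reading of that metric, assuming it measures how far the grouped channel covariance is from white (identity-like); the exact formula in scaling.py may differ.

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        # x: (frames, channels); split channels into groups and compare the
        # covariance "energy" against its ideal whitened value
        (num_frames, num_channels) = x.shape
        x = x.reshape(num_frames, num_groups, num_channels // num_groups)
        x = x - x.mean(dim=0)
        cov = torch.einsum("fgi,fgj->gij", x, x) / num_frames
        diag = torch.diagonal(cov, dim1=1, dim2=2)
        return (cov ** 2).sum() * x.shape[-1] / ((diag.sum(dim=1) ** 2).sum())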
], batch size: 32, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:40:47,093 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=52813.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:41:05,426 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.514e+02 1.794e+02 2.223e+02 4.744e+02, threshold=3.588e+02, percent-clipped=2.0 2023-03-26 11:41:19,445 INFO [finetune.py:976] (6/7) Epoch 10, batch 1300, loss[loss=0.2157, simple_loss=0.2681, pruned_loss=0.08168, over 4781.00 frames. ], tot_loss[loss=0.1923, simple_loss=0.2568, pruned_loss=0.06383, over 952941.04 frames. ], batch size: 28, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:41:19,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:41:26,030 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4121, 1.9649, 2.6944, 1.7599, 2.5757, 2.5244, 1.9335, 2.6479], device='cuda:6'), covar=tensor([0.1217, 0.2129, 0.1672, 0.2411, 0.0917, 0.1778, 0.2474, 0.0959], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0204, 0.0193, 0.0191, 0.0177, 0.0215, 0.0216, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:41:27,125 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0289, 2.0568, 2.0321, 1.4889, 2.0477, 2.3088, 2.0958, 1.7726], device='cuda:6'), covar=tensor([0.0547, 0.0585, 0.0705, 0.0811, 0.0645, 0.0648, 0.0590, 0.0993], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0131, 0.0142, 0.0122, 0.0117, 0.0141, 0.0141, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:41:51,930 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=52897.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:41:53,102 INFO [finetune.py:976] (6/7) Epoch 10, batch 1350, loss[loss=0.2809, simple_loss=0.3348, pruned_loss=0.1135, over 4816.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2589, pruned_loss=0.06502, over 955098.67 frames. 
], batch size: 39, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:42:02,452 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=52911.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 11:42:03,561 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5891, 1.7567, 1.9147, 1.9627, 1.7701, 3.3275, 1.4844, 1.7838], device='cuda:6'), covar=tensor([0.1027, 0.1586, 0.0995, 0.0881, 0.1455, 0.0327, 0.1469, 0.1513], device='cuda:6'), in_proj_covar=tensor([0.0077, 0.0082, 0.0076, 0.0078, 0.0092, 0.0083, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 11:42:15,924 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.950e+01 1.660e+02 2.003e+02 2.564e+02 3.985e+02, threshold=4.006e+02, percent-clipped=2.0 2023-03-26 11:42:29,792 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8248, 1.1989, 1.6871, 1.6432, 1.4832, 1.4588, 1.5492, 1.5530], device='cuda:6'), covar=tensor([0.5563, 0.5750, 0.5350, 0.5322, 0.6823, 0.5440, 0.6990, 0.5079], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0240, 0.0253, 0.0256, 0.0251, 0.0227, 0.0273, 0.0229], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:42:30,879 INFO [finetune.py:976] (6/7) Epoch 10, batch 1400, loss[loss=0.2051, simple_loss=0.2649, pruned_loss=0.07268, over 4896.00 frames. ], tot_loss[loss=0.197, simple_loss=0.262, pruned_loss=0.06601, over 952179.32 frames. ], batch size: 35, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:42:48,557 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=52968.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:43:14,419 INFO [finetune.py:976] (6/7) Epoch 10, batch 1450, loss[loss=0.2042, simple_loss=0.2777, pruned_loss=0.06535, over 4891.00 frames. ], tot_loss[loss=0.2001, simple_loss=0.2653, pruned_loss=0.06749, over 953325.59 frames. ], batch size: 32, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:43:35,022 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53016.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:43:45,117 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.605e+02 1.913e+02 2.318e+02 4.347e+02, threshold=3.826e+02, percent-clipped=3.0 2023-03-26 11:43:55,918 INFO [finetune.py:976] (6/7) Epoch 10, batch 1500, loss[loss=0.1793, simple_loss=0.2474, pruned_loss=0.05561, over 4821.00 frames. ], tot_loss[loss=0.2013, simple_loss=0.2664, pruned_loss=0.06809, over 953015.56 frames. ], batch size: 25, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:44:00,664 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53056.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:44:29,472 INFO [finetune.py:976] (6/7) Epoch 10, batch 1550, loss[loss=0.1808, simple_loss=0.2511, pruned_loss=0.05519, over 4924.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.2649, pruned_loss=0.06696, over 952643.38 frames. 
], batch size: 38, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:44:35,500 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53108.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:44:41,519 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53117.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:44:52,482 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.652e+02 2.005e+02 2.543e+02 4.651e+02, threshold=4.009e+02, percent-clipped=4.0 2023-03-26 11:45:03,286 INFO [finetune.py:976] (6/7) Epoch 10, batch 1600, loss[loss=0.2029, simple_loss=0.2733, pruned_loss=0.06626, over 4920.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.2625, pruned_loss=0.066, over 953502.32 frames. ], batch size: 43, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:45:03,383 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5800, 1.0994, 0.9072, 1.5334, 2.0317, 1.0590, 1.3556, 1.5178], device='cuda:6'), covar=tensor([0.1503, 0.2093, 0.1973, 0.1175, 0.2036, 0.2019, 0.1502, 0.1935], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0092, 0.0121, 0.0095, 0.0099, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 11:45:08,828 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9718, 1.3866, 1.9640, 1.8610, 1.6381, 1.6295, 1.7317, 1.7515], device='cuda:6'), covar=tensor([0.4353, 0.5174, 0.4109, 0.4516, 0.5438, 0.4231, 0.5535, 0.3738], device='cuda:6'), in_proj_covar=tensor([0.0233, 0.0238, 0.0252, 0.0255, 0.0249, 0.0226, 0.0272, 0.0228], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:45:48,099 INFO [finetune.py:976] (6/7) Epoch 10, batch 1650, loss[loss=0.1926, simple_loss=0.2542, pruned_loss=0.06547, over 4380.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2598, pruned_loss=0.0652, over 954015.85 frames. 
], batch size: 65, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:45:50,821 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8905, 1.5282, 1.5229, 1.1250, 1.6003, 1.6501, 1.5662, 2.2012], device='cuda:6'), covar=tensor([0.3915, 0.4301, 0.3313, 0.4006, 0.3959, 0.2413, 0.3520, 0.1695], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0259, 0.0222, 0.0279, 0.0243, 0.0208, 0.0244, 0.0212], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:45:52,558 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3131, 2.9594, 3.1034, 3.2484, 3.1196, 2.9346, 3.3375, 0.9475], device='cuda:6'), covar=tensor([0.1075, 0.0958, 0.0989, 0.1070, 0.1496, 0.1670, 0.1040, 0.5147], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0242, 0.0274, 0.0289, 0.0326, 0.0279, 0.0298, 0.0292], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:45:56,037 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53211.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:46:10,717 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.364e+01 1.592e+02 1.774e+02 2.189e+02 3.836e+02, threshold=3.549e+02, percent-clipped=0.0 2023-03-26 11:46:16,359 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53241.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:46:23,560 INFO [finetune.py:976] (6/7) Epoch 10, batch 1700, loss[loss=0.1941, simple_loss=0.2542, pruned_loss=0.06696, over 4817.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2581, pruned_loss=0.0645, over 955844.94 frames. ], batch size: 25, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:46:29,716 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53259.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:46:42,686 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0610, 1.5609, 2.3272, 3.6986, 2.6074, 2.5607, 0.7469, 2.9273], device='cuda:6'), covar=tensor([0.1612, 0.1563, 0.1306, 0.0555, 0.0796, 0.1945, 0.2120, 0.0561], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0118, 0.0134, 0.0165, 0.0102, 0.0139, 0.0127, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 11:46:56,427 INFO [finetune.py:976] (6/7) Epoch 10, batch 1750, loss[loss=0.2381, simple_loss=0.3029, pruned_loss=0.08662, over 4800.00 frames. ], tot_loss[loss=0.1957, simple_loss=0.2605, pruned_loss=0.06547, over 956796.73 frames. ], batch size: 51, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:46:58,861 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53302.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:00,083 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53304.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:07,251 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53315.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:10,062 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. 
limit=2.0 2023-03-26 11:47:18,948 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.172e+02 1.605e+02 1.832e+02 2.176e+02 4.638e+02, threshold=3.664e+02, percent-clipped=2.0 2023-03-26 11:47:29,901 INFO [finetune.py:976] (6/7) Epoch 10, batch 1800, loss[loss=0.2724, simple_loss=0.3228, pruned_loss=0.111, over 4901.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.2628, pruned_loss=0.06611, over 956528.55 frames. ], batch size: 43, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:47:45,407 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53365.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:46,647 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=53367.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:56,118 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53376.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:47:58,645 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8892, 1.9150, 1.6259, 1.9810, 2.5439, 2.0233, 1.6447, 1.5051], device='cuda:6'), covar=tensor([0.2389, 0.2010, 0.1992, 0.1721, 0.1759, 0.1231, 0.2515, 0.1983], device='cuda:6'), in_proj_covar=tensor([0.0234, 0.0206, 0.0205, 0.0187, 0.0238, 0.0178, 0.0211, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:48:26,357 INFO [finetune.py:976] (6/7) Epoch 10, batch 1850, loss[loss=0.1978, simple_loss=0.2653, pruned_loss=0.06518, over 4862.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2648, pruned_loss=0.06691, over 956292.17 frames. ], batch size: 31, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:48:32,692 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53408.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:48:35,084 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53412.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:48:51,319 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=53428.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 11:48:58,643 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.752e+02 2.111e+02 2.637e+02 7.323e+02, threshold=4.222e+02, percent-clipped=6.0 2023-03-26 11:48:59,417 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7053, 1.5845, 1.4878, 1.4576, 1.8153, 1.4787, 1.8438, 1.7043], device='cuda:6'), covar=tensor([0.1461, 0.2292, 0.3106, 0.2504, 0.2622, 0.1724, 0.2888, 0.1923], device='cuda:6'), in_proj_covar=tensor([0.0174, 0.0189, 0.0233, 0.0254, 0.0239, 0.0196, 0.0212, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:49:10,458 INFO [finetune.py:976] (6/7) Epoch 10, batch 1900, loss[loss=0.2087, simple_loss=0.2929, pruned_loss=0.06224, over 4817.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.2658, pruned_loss=0.06678, over 956675.99 frames. 
], batch size: 39, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:49:10,549 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.7844, 3.3371, 3.4937, 3.6628, 3.5572, 3.3613, 3.8859, 1.1848], device='cuda:6'), covar=tensor([0.0949, 0.0859, 0.0862, 0.1089, 0.1318, 0.1583, 0.0812, 0.5425], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0246, 0.0277, 0.0294, 0.0330, 0.0283, 0.0303, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:49:14,814 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53456.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:49:26,205 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4125, 1.2753, 1.3118, 1.3007, 0.8444, 2.1740, 0.7053, 1.2278], device='cuda:6'), covar=tensor([0.3445, 0.2534, 0.2209, 0.2401, 0.1974, 0.0348, 0.2893, 0.1410], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0123, 0.0116, 0.0099, 0.0100, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 11:49:43,868 INFO [finetune.py:976] (6/7) Epoch 10, batch 1950, loss[loss=0.175, simple_loss=0.2489, pruned_loss=0.05056, over 4757.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2643, pruned_loss=0.06602, over 954386.43 frames. ], batch size: 27, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:50:09,884 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.535e+02 1.778e+02 2.101e+02 3.650e+02, threshold=3.555e+02, percent-clipped=0.0 2023-03-26 11:50:29,329 INFO [finetune.py:976] (6/7) Epoch 10, batch 2000, loss[loss=0.1982, simple_loss=0.2655, pruned_loss=0.06542, over 4818.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2615, pruned_loss=0.06532, over 954675.47 frames. ], batch size: 41, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:50:30,098 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1917, 1.8181, 2.1383, 2.1600, 1.8014, 1.8469, 2.0519, 1.9299], device='cuda:6'), covar=tensor([0.4800, 0.5274, 0.4222, 0.4834, 0.6287, 0.4307, 0.6235, 0.3999], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0240, 0.0253, 0.0256, 0.0251, 0.0227, 0.0274, 0.0229], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:50:31,848 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8624, 1.7208, 1.5973, 1.9080, 2.2215, 1.9205, 1.4634, 1.4908], device='cuda:6'), covar=tensor([0.2258, 0.2199, 0.1961, 0.1778, 0.1727, 0.1230, 0.2687, 0.2069], device='cuda:6'), in_proj_covar=tensor([0.0233, 0.0205, 0.0204, 0.0186, 0.0238, 0.0178, 0.0210, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:50:42,772 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-26 11:51:22,348 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53597.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:51:23,505 INFO [finetune.py:976] (6/7) Epoch 10, batch 2050, loss[loss=0.1807, simple_loss=0.2449, pruned_loss=0.05824, over 4826.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2573, pruned_loss=0.06352, over 955895.96 frames. 
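
Each loss line above reports three numbers: the combined loss that is optimized, the cheap "simple" transducer loss that stabilizes the alignment, and the "pruned" loss evaluated only inside the lattice region around that alignment. Assuming a simple-loss weight of 0.5, the combination below reproduces the logged values (for batch 1950 above: 0.5 * 0.2489 + 0.05056 = 0.1750, matching loss=0.175), though the weight is schedule-dependent in general.

    def combine_losses(simple_loss, pruned_loss, simple_loss_scale=0.5):
        # the simple loss guides alignment; the pruned loss is the
        # exact objective restricted to the pruned lattice
        return simple_loss_scale * simple_loss + pruned_loss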
], batch size: 38, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:51:34,196 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4403, 2.1916, 1.8316, 0.9198, 1.9973, 1.8527, 1.6929, 2.0704], device='cuda:6'), covar=tensor([0.0829, 0.0864, 0.1486, 0.2061, 0.1541, 0.2281, 0.2199, 0.0961], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0202, 0.0202, 0.0188, 0.0217, 0.0208, 0.0224, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:51:44,832 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.628e+02 1.908e+02 2.249e+02 5.707e+02, threshold=3.816e+02, percent-clipped=1.0 2023-03-26 11:51:56,176 INFO [finetune.py:976] (6/7) Epoch 10, batch 2100, loss[loss=0.2925, simple_loss=0.326, pruned_loss=0.1295, over 4067.00 frames. ], tot_loss[loss=0.1921, simple_loss=0.2569, pruned_loss=0.06359, over 956462.41 frames. ], batch size: 65, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:52:03,968 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53660.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:52:13,507 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53671.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:52:24,437 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5251, 1.5373, 1.6525, 1.7395, 1.6250, 3.3731, 1.3640, 1.6094], device='cuda:6'), covar=tensor([0.1026, 0.1825, 0.1156, 0.1008, 0.1671, 0.0254, 0.1545, 0.1703], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0077, 0.0091, 0.0082, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 11:52:37,542 INFO [finetune.py:976] (6/7) Epoch 10, batch 2150, loss[loss=0.2267, simple_loss=0.2955, pruned_loss=0.07894, over 4784.00 frames. ], tot_loss[loss=0.1986, simple_loss=0.2634, pruned_loss=0.06694, over 954504.92 frames. 
], batch size: 29, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:52:39,912 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2389, 2.1445, 2.2371, 1.0588, 2.5500, 2.6481, 2.1426, 2.0732], device='cuda:6'), covar=tensor([0.1056, 0.0816, 0.0530, 0.0763, 0.0417, 0.0753, 0.0528, 0.0736], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0155, 0.0120, 0.0134, 0.0131, 0.0125, 0.0144, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.5557e-05, 1.1355e-04, 8.6510e-05, 9.7034e-05, 9.3634e-05, 9.1111e-05, 1.0528e-04, 1.0677e-04], device='cuda:6') 2023-03-26 11:52:52,731 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53712.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:52:53,392 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1719, 2.0520, 1.7184, 2.1454, 2.1447, 1.8254, 2.4971, 2.2747], device='cuda:6'), covar=tensor([0.1428, 0.2553, 0.3369, 0.2979, 0.2824, 0.1810, 0.3616, 0.1781], device='cuda:6'), in_proj_covar=tensor([0.0174, 0.0188, 0.0232, 0.0252, 0.0238, 0.0195, 0.0211, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:53:08,708 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=53723.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 11:53:19,768 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.764e+02 2.057e+02 2.459e+02 5.535e+02, threshold=4.114e+02, percent-clipped=2.0 2023-03-26 11:53:19,889 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8720, 1.5416, 1.2409, 1.8973, 2.1416, 1.7213, 1.7440, 1.9152], device='cuda:6'), covar=tensor([0.1203, 0.1641, 0.1865, 0.0955, 0.1824, 0.2070, 0.1141, 0.1494], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0094, 0.0111, 0.0092, 0.0121, 0.0095, 0.0099, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 11:53:34,457 INFO [finetune.py:976] (6/7) Epoch 10, batch 2200, loss[loss=0.2435, simple_loss=0.2984, pruned_loss=0.09425, over 4823.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2634, pruned_loss=0.0663, over 953215.69 frames. ], batch size: 39, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:53:43,156 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:54:07,986 INFO [finetune.py:976] (6/7) Epoch 10, batch 2250, loss[loss=0.1775, simple_loss=0.2532, pruned_loss=0.05095, over 4847.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.2648, pruned_loss=0.06698, over 953733.66 frames. 
], batch size: 31, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:54:30,209 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.658e+02 1.958e+02 2.430e+02 3.560e+02, threshold=3.915e+02, percent-clipped=0.0 2023-03-26 11:54:35,109 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0877, 1.0526, 1.0227, 0.5469, 0.9124, 1.2080, 1.2835, 1.0101], device='cuda:6'), covar=tensor([0.0916, 0.0491, 0.0505, 0.0469, 0.0477, 0.0566, 0.0380, 0.0641], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0155, 0.0120, 0.0134, 0.0131, 0.0125, 0.0144, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.5624e-05, 1.1364e-04, 8.6693e-05, 9.7163e-05, 9.3900e-05, 9.1379e-05, 1.0556e-04, 1.0712e-04], device='cuda:6') 2023-03-26 11:54:41,557 INFO [finetune.py:976] (6/7) Epoch 10, batch 2300, loss[loss=0.1903, simple_loss=0.2589, pruned_loss=0.06083, over 4855.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2651, pruned_loss=0.06693, over 952966.54 frames. ], batch size: 31, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:54:52,740 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5148, 1.7052, 1.2938, 1.4996, 1.9932, 1.8445, 1.5283, 1.4537], device='cuda:6'), covar=tensor([0.0382, 0.0278, 0.0562, 0.0295, 0.0186, 0.0386, 0.0379, 0.0334], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0108, 0.0138, 0.0114, 0.0101, 0.0102, 0.0091, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0228e-05, 8.4546e-05, 1.1007e-04, 8.9344e-05, 7.9166e-05, 7.5413e-05, 6.8849e-05, 8.2455e-05], device='cuda:6') 2023-03-26 11:55:08,498 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-26 11:55:16,003 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53897.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:55:17,108 INFO [finetune.py:976] (6/7) Epoch 10, batch 2350, loss[loss=0.1564, simple_loss=0.2206, pruned_loss=0.04611, over 4820.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.2629, pruned_loss=0.06584, over 954294.73 frames. ], batch size: 39, lr: 3.75e-03, grad_scale: 32.0 2023-03-26 11:55:37,300 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9657, 1.6752, 1.5140, 1.5412, 1.6261, 1.5581, 1.6359, 2.3161], device='cuda:6'), covar=tensor([0.4276, 0.4543, 0.3659, 0.4312, 0.4400, 0.2683, 0.4496, 0.1903], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0259, 0.0222, 0.0279, 0.0243, 0.0209, 0.0244, 0.0212], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:55:47,270 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.820e+01 1.627e+02 1.969e+02 2.442e+02 4.599e+02, threshold=3.938e+02, percent-clipped=2.0 2023-03-26 11:55:58,286 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=53945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:05,529 INFO [finetune.py:976] (6/7) Epoch 10, batch 2400, loss[loss=0.2368, simple_loss=0.2904, pruned_loss=0.09154, over 4866.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.2609, pruned_loss=0.06579, over 954835.65 frames. 
], batch size: 31, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 11:56:15,969 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53960.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:23,397 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6810, 1.5396, 2.1517, 3.5907, 2.3996, 2.4871, 1.1755, 2.8040], device='cuda:6'), covar=tensor([0.1742, 0.1495, 0.1416, 0.0590, 0.0778, 0.1346, 0.1797, 0.0590], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0102, 0.0138, 0.0127, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 11:56:24,630 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=53971.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:30,257 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 11:56:41,933 INFO [finetune.py:976] (6/7) Epoch 10, batch 2450, loss[loss=0.1679, simple_loss=0.2332, pruned_loss=0.05134, over 4810.00 frames. ], tot_loss[loss=0.194, simple_loss=0.258, pruned_loss=0.06502, over 954924.81 frames. ], batch size: 25, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:56:49,155 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=54008.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:56,925 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=54019.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:56:59,870 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=54023.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 11:57:05,186 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.615e+02 1.942e+02 2.268e+02 4.833e+02, threshold=3.884e+02, percent-clipped=2.0 2023-03-26 11:57:16,019 INFO [finetune.py:976] (6/7) Epoch 10, batch 2500, loss[loss=0.1853, simple_loss=0.2547, pruned_loss=0.058, over 4818.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2592, pruned_loss=0.06535, over 953827.71 frames. ], batch size: 33, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:57:22,058 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8255, 1.3474, 0.9262, 1.7704, 2.2628, 1.3661, 1.6277, 1.6924], device='cuda:6'), covar=tensor([0.1448, 0.2185, 0.1997, 0.1186, 0.1669, 0.1932, 0.1453, 0.1982], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0112, 0.0092, 0.0121, 0.0095, 0.0099, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 11:57:42,139 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=54071.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 11:58:00,117 INFO [finetune.py:976] (6/7) Epoch 10, batch 2550, loss[loss=0.1894, simple_loss=0.2552, pruned_loss=0.06184, over 4777.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.262, pruned_loss=0.06646, over 952000.60 frames. 
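
The learning rate decays very slowly here (3.76e-03 down to 3.74e-03 across a few thousand batches). A sketch of an Eden-style schedule that produces this shape, assuming the rate depends smoothly on both batch and epoch counts; the constants are assumptions chosen because they approximately reproduce the logged values around batch 52000-54500, not the exact ones for this run.

    def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor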
], batch size: 28, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:58:00,242 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8420, 1.7067, 2.3051, 1.4428, 2.0163, 2.2258, 1.6335, 2.4506], device='cuda:6'), covar=tensor([0.1541, 0.2074, 0.1602, 0.2264, 0.1064, 0.1523, 0.2893, 0.1016], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0203, 0.0192, 0.0191, 0.0177, 0.0215, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:58:01,490 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3089, 2.3304, 1.8476, 0.9873, 1.9835, 1.8214, 1.6932, 2.0545], device='cuda:6'), covar=tensor([0.0864, 0.0554, 0.1449, 0.1691, 0.1416, 0.1934, 0.1848, 0.0813], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0201, 0.0187, 0.0215, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:58:02,109 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6673, 1.5192, 1.3562, 1.6776, 2.0243, 1.6755, 1.2103, 1.4000], device='cuda:6'), covar=tensor([0.2011, 0.1936, 0.1842, 0.1528, 0.1539, 0.1167, 0.2536, 0.1759], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0208, 0.0208, 0.0189, 0.0241, 0.0181, 0.0213, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 11:58:31,771 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.73 vs. limit=5.0 2023-03-26 11:58:35,815 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.936e+01 1.671e+02 2.051e+02 2.356e+02 3.900e+02, threshold=4.103e+02, percent-clipped=1.0 2023-03-26 11:58:46,743 INFO [finetune.py:976] (6/7) Epoch 10, batch 2600, loss[loss=0.2049, simple_loss=0.2669, pruned_loss=0.07144, over 4708.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2639, pruned_loss=0.0672, over 951842.91 frames. ], batch size: 59, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:59:19,472 INFO [finetune.py:976] (6/7) Epoch 10, batch 2650, loss[loss=0.2102, simple_loss=0.2917, pruned_loss=0.06437, over 4937.00 frames. ], tot_loss[loss=0.1999, simple_loss=0.2652, pruned_loss=0.06727, over 951648.37 frames. ], batch size: 42, lr: 3.74e-03, grad_scale: 64.0 2023-03-26 11:59:43,743 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.585e+01 1.566e+02 1.779e+02 2.159e+02 3.883e+02, threshold=3.557e+02, percent-clipped=0.0 2023-03-26 11:59:49,934 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2123, 1.7785, 2.5502, 4.0157, 2.8771, 2.7803, 0.7086, 3.1710], device='cuda:6'), covar=tensor([0.1624, 0.1449, 0.1451, 0.0468, 0.0703, 0.1414, 0.2220, 0.0536], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0118, 0.0135, 0.0166, 0.0102, 0.0138, 0.0128, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 11:59:53,472 INFO [finetune.py:976] (6/7) Epoch 10, batch 2700, loss[loss=0.2103, simple_loss=0.2629, pruned_loss=0.07884, over 4902.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2647, pruned_loss=0.06666, over 952135.99 frames. 
], batch size: 35, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:00:20,006 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.1465, 4.4550, 4.7369, 4.9573, 4.8535, 4.6247, 5.2467, 1.5445], device='cuda:6'), covar=tensor([0.0701, 0.0822, 0.0594, 0.0813, 0.1123, 0.1332, 0.0520, 0.5788], device='cuda:6'), in_proj_covar=tensor([0.0345, 0.0243, 0.0274, 0.0287, 0.0325, 0.0279, 0.0299, 0.0291], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:00:26,576 INFO [finetune.py:976] (6/7) Epoch 10, batch 2750, loss[loss=0.268, simple_loss=0.3072, pruned_loss=0.1144, over 4133.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2621, pruned_loss=0.06532, over 953055.79 frames. ], batch size: 65, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:00:50,904 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.545e+02 1.929e+02 2.415e+02 3.548e+02, threshold=3.859e+02, percent-clipped=0.0 2023-03-26 12:00:56,301 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2828, 2.5942, 2.4118, 1.6737, 2.4696, 2.7802, 2.6315, 2.2213], device='cuda:6'), covar=tensor([0.0667, 0.0604, 0.0747, 0.0985, 0.0714, 0.0691, 0.0610, 0.0986], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0132, 0.0142, 0.0123, 0.0118, 0.0141, 0.0141, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:00:56,327 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1978, 2.2449, 1.5551, 2.2817, 2.2750, 1.8640, 3.0373, 2.2674], device='cuda:6'), covar=tensor([0.1515, 0.2494, 0.3676, 0.3458, 0.2760, 0.1696, 0.2807, 0.2023], device='cuda:6'), in_proj_covar=tensor([0.0174, 0.0188, 0.0233, 0.0253, 0.0239, 0.0195, 0.0213, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:01:01,570 INFO [finetune.py:976] (6/7) Epoch 10, batch 2800, loss[loss=0.1857, simple_loss=0.2485, pruned_loss=0.06141, over 4808.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2594, pruned_loss=0.06469, over 953115.78 frames. ], batch size: 25, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:01:48,148 INFO [finetune.py:976] (6/7) Epoch 10, batch 2850, loss[loss=0.1856, simple_loss=0.2471, pruned_loss=0.06202, over 4872.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2579, pruned_loss=0.06396, over 954525.95 frames. ], batch size: 31, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:02:10,452 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.958e+01 1.628e+02 1.894e+02 2.190e+02 3.699e+02, threshold=3.787e+02, percent-clipped=0.0 2023-03-26 12:02:22,197 INFO [finetune.py:976] (6/7) Epoch 10, batch 2900, loss[loss=0.1689, simple_loss=0.2286, pruned_loss=0.05454, over 4195.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.2623, pruned_loss=0.06635, over 955371.15 frames. ], batch size: 18, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:02:57,316 INFO [finetune.py:976] (6/7) Epoch 10, batch 2950, loss[loss=0.1993, simple_loss=0.273, pruned_loss=0.06281, over 4901.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.2651, pruned_loss=0.06689, over 956458.12 frames. 
], batch size: 35, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:03:18,745 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.146e+02 1.694e+02 2.010e+02 2.318e+02 4.609e+02, threshold=4.019e+02, percent-clipped=2.0 2023-03-26 12:03:30,490 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9918, 1.8032, 1.5584, 1.7098, 1.7109, 1.7014, 1.7128, 2.4699], device='cuda:6'), covar=tensor([0.4633, 0.5037, 0.3852, 0.4516, 0.4373, 0.2575, 0.4571, 0.1856], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0259, 0.0222, 0.0279, 0.0243, 0.0208, 0.0246, 0.0212], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:03:40,000 INFO [finetune.py:976] (6/7) Epoch 10, batch 3000, loss[loss=0.1921, simple_loss=0.2574, pruned_loss=0.06335, over 4918.00 frames. ], tot_loss[loss=0.2004, simple_loss=0.2667, pruned_loss=0.06708, over 955278.33 frames. ], batch size: 33, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:03:40,001 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 12:03:57,342 INFO [finetune.py:1010] (6/7) Epoch 10, validation: loss=0.1584, simple_loss=0.2295, pruned_loss=0.04366, over 2265189.00 frames. 2023-03-26 12:03:57,342 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 12:04:29,066 INFO [finetune.py:976] (6/7) Epoch 10, batch 3050, loss[loss=0.2604, simple_loss=0.3167, pruned_loss=0.1021, over 4692.00 frames. ], tot_loss[loss=0.1999, simple_loss=0.2662, pruned_loss=0.06682, over 956315.44 frames. ], batch size: 59, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:04:31,767 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.27 vs. limit=5.0 2023-03-26 12:04:43,198 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6323, 1.9263, 1.5486, 1.5005, 2.1610, 2.1770, 1.9225, 1.8706], device='cuda:6'), covar=tensor([0.0436, 0.0362, 0.0580, 0.0401, 0.0318, 0.0630, 0.0308, 0.0406], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0108, 0.0137, 0.0113, 0.0101, 0.0102, 0.0091, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0542e-05, 8.4548e-05, 1.0932e-04, 8.8617e-05, 7.8842e-05, 7.5697e-05, 6.8835e-05, 8.2172e-05], device='cuda:6') 2023-03-26 12:04:52,093 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.606e+02 1.839e+02 2.259e+02 4.011e+02, threshold=3.679e+02, percent-clipped=0.0 2023-03-26 12:04:56,391 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1903, 1.7716, 2.1301, 2.0421, 1.7983, 1.7865, 1.9669, 1.8741], device='cuda:6'), covar=tensor([0.4470, 0.5072, 0.3908, 0.4604, 0.5935, 0.4714, 0.5871, 0.3979], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0239, 0.0252, 0.0256, 0.0251, 0.0227, 0.0274, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:05:02,813 INFO [finetune.py:976] (6/7) Epoch 10, batch 3100, loss[loss=0.1679, simple_loss=0.2364, pruned_loss=0.04969, over 4770.00 frames. ], tot_loss[loss=0.1991, simple_loss=0.2647, pruned_loss=0.06671, over 957960.82 frames. ], batch size: 26, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:05:13,586 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. 
limit=2.0 2023-03-26 12:05:14,423 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=54664.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:05:36,458 INFO [finetune.py:976] (6/7) Epoch 10, batch 3150, loss[loss=0.1481, simple_loss=0.2215, pruned_loss=0.03736, over 4782.00 frames. ], tot_loss[loss=0.1957, simple_loss=0.2609, pruned_loss=0.06524, over 956497.76 frames. ], batch size: 26, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:05:54,649 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=54725.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:05:56,656 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 12:05:59,385 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.653e+02 1.993e+02 2.298e+02 5.311e+02, threshold=3.986e+02, percent-clipped=2.0 2023-03-26 12:06:10,109 INFO [finetune.py:976] (6/7) Epoch 10, batch 3200, loss[loss=0.1827, simple_loss=0.231, pruned_loss=0.06721, over 4816.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2583, pruned_loss=0.06472, over 958430.16 frames. ], batch size: 25, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:06:53,305 INFO [finetune.py:976] (6/7) Epoch 10, batch 3250, loss[loss=0.2042, simple_loss=0.2807, pruned_loss=0.06379, over 4892.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2587, pruned_loss=0.06511, over 958214.70 frames. ], batch size: 35, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:07:26,175 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.711e+02 2.094e+02 2.546e+02 5.601e+02, threshold=4.189e+02, percent-clipped=2.0 2023-03-26 12:07:46,362 INFO [finetune.py:976] (6/7) Epoch 10, batch 3300, loss[loss=0.2136, simple_loss=0.2923, pruned_loss=0.06749, over 4817.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2624, pruned_loss=0.06638, over 956888.66 frames. ], batch size: 38, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:07:51,745 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.44 vs. limit=5.0 2023-03-26 12:07:55,825 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=54862.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:08:18,161 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.99 vs. limit=5.0 2023-03-26 12:08:20,114 INFO [finetune.py:976] (6/7) Epoch 10, batch 3350, loss[loss=0.2171, simple_loss=0.2785, pruned_loss=0.07781, over 4787.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2648, pruned_loss=0.06687, over 956823.82 frames. 
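
The validation block above ("Computing validation loss ... validation: loss=0.1584 ... Maximum memory allocated so far is 6345MB") interleaves a full dev-set pass with training. A minimal sketch of that step; compute_loss and dev_loader are stand-ins for the real finetune.py helpers, while max_memory_allocated is the actual torch API behind the memory line.

    import torch

    def run_validation(model, dev_loader, compute_loss, device):
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = compute_loss(model, batch)
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        max_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        return tot_loss / tot_frames, max_mb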
], batch size: 54, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:08:48,196 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=54923.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 12:08:49,315 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2786, 1.7057, 1.6841, 0.9589, 1.8657, 2.1429, 1.8787, 1.7360], device='cuda:6'), covar=tensor([0.0855, 0.0688, 0.0534, 0.0644, 0.0535, 0.0493, 0.0516, 0.0621], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0154, 0.0121, 0.0133, 0.0131, 0.0124, 0.0145, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.5070e-05, 1.1336e-04, 8.7291e-05, 9.6032e-05, 9.3758e-05, 9.0754e-05, 1.0598e-04, 1.0771e-04], device='cuda:6') 2023-03-26 12:08:57,770 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.693e+02 1.965e+02 2.442e+02 4.084e+02, threshold=3.930e+02, percent-clipped=0.0 2023-03-26 12:09:07,543 INFO [finetune.py:976] (6/7) Epoch 10, batch 3400, loss[loss=0.1936, simple_loss=0.2694, pruned_loss=0.05889, over 4804.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2646, pruned_loss=0.06662, over 954291.71 frames. ], batch size: 40, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:09:56,947 INFO [finetune.py:976] (6/7) Epoch 10, batch 3450, loss[loss=0.191, simple_loss=0.252, pruned_loss=0.06503, over 4813.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2638, pruned_loss=0.06574, over 957022.64 frames. ], batch size: 25, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:09:57,080 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9964, 1.8696, 1.6143, 2.0358, 2.4430, 2.0934, 1.5139, 1.6041], device='cuda:6'), covar=tensor([0.2041, 0.1900, 0.1843, 0.1582, 0.1600, 0.1083, 0.2494, 0.1960], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0206, 0.0206, 0.0188, 0.0238, 0.0180, 0.0212, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:10:00,799 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3747, 3.8028, 4.0208, 4.2025, 4.1392, 3.8932, 4.4601, 1.3436], device='cuda:6'), covar=tensor([0.0692, 0.0666, 0.0693, 0.0903, 0.0975, 0.1322, 0.0563, 0.5289], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0243, 0.0274, 0.0289, 0.0328, 0.0282, 0.0300, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:10:16,052 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55020.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 12:10:30,757 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.349e+01 1.534e+02 1.957e+02 2.350e+02 5.428e+02, threshold=3.914e+02, percent-clipped=3.0 2023-03-26 12:10:51,434 INFO [finetune.py:976] (6/7) Epoch 10, batch 3500, loss[loss=0.1868, simple_loss=0.242, pruned_loss=0.06581, over 4833.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2608, pruned_loss=0.06464, over 957710.13 frames. ], batch size: 33, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:11:30,722 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. 
limit=2.0 2023-03-26 12:11:30,924 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2043, 1.7171, 2.5132, 3.7760, 2.7118, 2.6624, 0.7497, 3.0136], device='cuda:6'), covar=tensor([0.1627, 0.1538, 0.1374, 0.0536, 0.0703, 0.1793, 0.2156, 0.0489], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0164, 0.0100, 0.0137, 0.0126, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 12:11:35,834 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 12:11:36,220 INFO [finetune.py:976] (6/7) Epoch 10, batch 3550, loss[loss=0.1804, simple_loss=0.2379, pruned_loss=0.06142, over 4931.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2577, pruned_loss=0.06395, over 956702.52 frames. ], batch size: 33, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:11:58,574 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.369e+01 1.530e+02 1.896e+02 2.280e+02 4.793e+02, threshold=3.791e+02, percent-clipped=5.0 2023-03-26 12:12:04,012 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-26 12:12:09,218 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55148.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:12:09,742 INFO [finetune.py:976] (6/7) Epoch 10, batch 3600, loss[loss=0.2349, simple_loss=0.2878, pruned_loss=0.09094, over 4824.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2562, pruned_loss=0.06407, over 955845.42 frames. ], batch size: 40, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:12:42,942 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9623, 1.5649, 2.3898, 3.5907, 2.4986, 2.6833, 0.8676, 2.7621], device='cuda:6'), covar=tensor([0.1704, 0.1613, 0.1411, 0.0632, 0.0790, 0.1886, 0.2181, 0.0617], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0163, 0.0100, 0.0136, 0.0125, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 12:12:43,417 INFO [finetune.py:976] (6/7) Epoch 10, batch 3650, loss[loss=0.2292, simple_loss=0.2931, pruned_loss=0.08268, over 4733.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2576, pruned_loss=0.06452, over 955801.12 frames. ], batch size: 54, lr: 3.74e-03, grad_scale: 32.0 2023-03-26 12:12:49,708 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55209.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:12:52,564 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55211.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:12:56,777 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55218.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:13:14,908 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.616e+02 1.938e+02 2.270e+02 4.700e+02, threshold=3.875e+02, percent-clipped=1.0 2023-03-26 12:13:26,511 INFO [finetune.py:976] (6/7) Epoch 10, batch 3700, loss[loss=0.1834, simple_loss=0.2567, pruned_loss=0.05507, over 4929.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.262, pruned_loss=0.06628, over 955004.37 frames. 
], batch size: 33, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:13:40,332 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:14:00,006 INFO [finetune.py:976] (6/7) Epoch 10, batch 3750, loss[loss=0.2659, simple_loss=0.3167, pruned_loss=0.1076, over 4891.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2629, pruned_loss=0.06638, over 953955.42 frames. ], batch size: 35, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:14:16,755 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55320.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:14:33,805 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.583e+02 1.835e+02 2.150e+02 3.880e+02, threshold=3.669e+02, percent-clipped=1.0 2023-03-26 12:14:45,563 INFO [finetune.py:976] (6/7) Epoch 10, batch 3800, loss[loss=0.1786, simple_loss=0.2535, pruned_loss=0.05187, over 4789.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.2651, pruned_loss=0.06711, over 953299.37 frames. ], batch size: 29, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:14:54,172 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0188, 1.4252, 0.7487, 2.0962, 2.3468, 1.8520, 1.5726, 1.8522], device='cuda:6'), covar=tensor([0.1334, 0.1917, 0.2133, 0.1065, 0.1853, 0.1951, 0.1409, 0.1917], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0093, 0.0123, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 12:14:57,818 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55368.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:15:27,046 INFO [finetune.py:976] (6/7) Epoch 10, batch 3850, loss[loss=0.1951, simple_loss=0.2489, pruned_loss=0.07061, over 4144.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.2648, pruned_loss=0.06705, over 953231.15 frames. ], batch size: 18, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:15:49,874 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.578e+02 1.920e+02 2.344e+02 4.809e+02, threshold=3.839e+02, percent-clipped=3.0 2023-03-26 12:15:56,259 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5355, 2.6089, 2.3893, 1.9077, 2.8693, 2.8222, 2.8921, 2.0869], device='cuda:6'), covar=tensor([0.0779, 0.0836, 0.0981, 0.1139, 0.0661, 0.0946, 0.0829, 0.1604], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0133, 0.0143, 0.0124, 0.0120, 0.0143, 0.0143, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:16:01,517 INFO [finetune.py:976] (6/7) Epoch 10, batch 3900, loss[loss=0.1993, simple_loss=0.265, pruned_loss=0.06679, over 4744.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.261, pruned_loss=0.0653, over 954813.01 frames. 
], batch size: 54, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:16:14,916 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8378, 1.0059, 1.8005, 1.7005, 1.5402, 1.4980, 1.5815, 1.6620], device='cuda:6'), covar=tensor([0.4234, 0.4682, 0.3911, 0.4137, 0.5494, 0.4139, 0.5169, 0.3887], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0239, 0.0253, 0.0257, 0.0252, 0.0229, 0.0274, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:16:15,505 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8086, 1.7113, 1.8464, 1.2594, 1.8544, 1.9643, 1.8598, 1.5427], device='cuda:6'), covar=tensor([0.0430, 0.0522, 0.0559, 0.0735, 0.0731, 0.0465, 0.0461, 0.0905], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0133, 0.0143, 0.0124, 0.0120, 0.0142, 0.0143, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:16:44,730 INFO [finetune.py:976] (6/7) Epoch 10, batch 3950, loss[loss=0.1922, simple_loss=0.2467, pruned_loss=0.06883, over 4782.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2579, pruned_loss=0.0645, over 957140.14 frames. ], batch size: 29, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:16:48,766 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:16:58,366 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55518.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:17:13,739 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.580e+02 1.857e+02 2.217e+02 3.906e+02, threshold=3.714e+02, percent-clipped=1.0 2023-03-26 12:17:35,772 INFO [finetune.py:976] (6/7) Epoch 10, batch 4000, loss[loss=0.2245, simple_loss=0.2859, pruned_loss=0.08154, over 4840.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2582, pruned_loss=0.0648, over 958396.95 frames. ], batch size: 39, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:17:48,291 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55566.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:17:48,913 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=55567.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:18:02,231 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6814, 3.8524, 3.5983, 1.7153, 3.9793, 2.9034, 0.6642, 2.6255], device='cuda:6'), covar=tensor([0.2628, 0.1967, 0.1473, 0.3625, 0.1039, 0.1073, 0.5101, 0.1525], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0173, 0.0159, 0.0128, 0.0155, 0.0122, 0.0146, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 12:18:09,101 INFO [finetune.py:976] (6/7) Epoch 10, batch 4050, loss[loss=0.2244, simple_loss=0.2892, pruned_loss=0.07979, over 4748.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2614, pruned_loss=0.06604, over 956160.72 frames. ], batch size: 54, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:18:34,631 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.736e+02 2.138e+02 2.510e+02 4.140e+02, threshold=4.276e+02, percent-clipped=4.0 2023-03-26 12:18:44,821 INFO [finetune.py:976] (6/7) Epoch 10, batch 4100, loss[loss=0.1794, simple_loss=0.25, pruned_loss=0.05443, over 4837.00 frames. 
], tot_loss[loss=0.1994, simple_loss=0.2645, pruned_loss=0.06711, over 956937.55 frames. ], batch size: 30, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:19:12,873 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8833, 1.7796, 1.5243, 1.4532, 1.8974, 1.6213, 1.8561, 1.8259], device='cuda:6'), covar=tensor([0.1570, 0.2412, 0.3532, 0.2890, 0.3001, 0.1966, 0.2928, 0.2200], device='cuda:6'), in_proj_covar=tensor([0.0175, 0.0188, 0.0233, 0.0253, 0.0240, 0.0196, 0.0213, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:19:17,498 INFO [finetune.py:976] (6/7) Epoch 10, batch 4150, loss[loss=0.2287, simple_loss=0.291, pruned_loss=0.08315, over 4790.00 frames. ], tot_loss[loss=0.1997, simple_loss=0.2655, pruned_loss=0.0669, over 957228.26 frames. ], batch size: 51, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:19:49,996 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.688e+02 2.035e+02 2.462e+02 3.895e+02, threshold=4.069e+02, percent-clipped=0.0 2023-03-26 12:19:59,121 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55748.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:19:59,614 INFO [finetune.py:976] (6/7) Epoch 10, batch 4200, loss[loss=0.1618, simple_loss=0.226, pruned_loss=0.04877, over 4790.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2647, pruned_loss=0.06633, over 955186.24 frames. ], batch size: 29, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:20:53,374 INFO [finetune.py:976] (6/7) Epoch 10, batch 4250, loss[loss=0.2017, simple_loss=0.2541, pruned_loss=0.07463, over 4684.00 frames. ], tot_loss[loss=0.1976, simple_loss=0.2631, pruned_loss=0.06598, over 956170.90 frames. ], batch size: 23, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:20:57,625 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55804.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:21:06,160 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55809.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:21:38,837 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.929e+01 1.631e+02 1.908e+02 2.201e+02 4.056e+02, threshold=3.816e+02, percent-clipped=0.0 2023-03-26 12:21:48,081 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=55848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:21:48,567 INFO [finetune.py:976] (6/7) Epoch 10, batch 4300, loss[loss=0.1976, simple_loss=0.255, pruned_loss=0.07008, over 4796.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2598, pruned_loss=0.0647, over 956115.94 frames. 
], batch size: 26, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:21:50,381 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:22:10,863 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=55867.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:22:28,655 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1170, 3.6123, 3.7646, 3.8887, 3.8732, 3.6924, 4.1770, 1.4113], device='cuda:6'), covar=tensor([0.0732, 0.0749, 0.0727, 0.0950, 0.1125, 0.1335, 0.0641, 0.5214], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0245, 0.0277, 0.0292, 0.0331, 0.0285, 0.0302, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:22:36,979 INFO [finetune.py:976] (6/7) Epoch 10, batch 4350, loss[loss=0.1531, simple_loss=0.2144, pruned_loss=0.04589, over 4802.00 frames. ], tot_loss[loss=0.1921, simple_loss=0.2567, pruned_loss=0.06375, over 955033.86 frames. ], batch size: 23, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:22:47,077 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-26 12:22:48,867 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=55909.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:22:58,948 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=55915.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:23:09,977 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1618, 1.9143, 2.0225, 0.9360, 2.2752, 2.3743, 2.0432, 1.8197], device='cuda:6'), covar=tensor([0.1113, 0.0901, 0.0451, 0.0795, 0.0407, 0.0741, 0.0544, 0.0919], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0154, 0.0121, 0.0133, 0.0131, 0.0125, 0.0144, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.4916e-05, 1.1330e-04, 8.7483e-05, 9.6286e-05, 9.3818e-05, 9.0850e-05, 1.0561e-04, 1.0687e-04], device='cuda:6') 2023-03-26 12:23:23,258 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.622e+02 1.928e+02 2.410e+02 3.855e+02, threshold=3.856e+02, percent-clipped=1.0 2023-03-26 12:23:37,719 INFO [finetune.py:976] (6/7) Epoch 10, batch 4400, loss[loss=0.206, simple_loss=0.2744, pruned_loss=0.06881, over 4829.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2582, pruned_loss=0.06444, over 953797.31 frames. ], batch size: 33, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:23:57,483 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0568, 0.9134, 0.9293, 1.0659, 1.1631, 1.1361, 1.0044, 0.9597], device='cuda:6'), covar=tensor([0.0329, 0.0342, 0.0569, 0.0327, 0.0285, 0.0453, 0.0310, 0.0388], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0109, 0.0138, 0.0113, 0.0100, 0.0102, 0.0091, 0.0106], device='cuda:6'), out_proj_covar=tensor([7.0424e-05, 8.5401e-05, 1.1013e-04, 8.8932e-05, 7.8498e-05, 7.5751e-05, 6.9272e-05, 8.1874e-05], device='cuda:6') 2023-03-26 12:24:09,594 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-26 12:24:11,804 INFO [finetune.py:976] (6/7) Epoch 10, batch 4450, loss[loss=0.154, simple_loss=0.2267, pruned_loss=0.04062, over 4727.00 frames. ], tot_loss[loss=0.1954, simple_loss=0.2611, pruned_loss=0.06489, over 951976.11 frames. 
], batch size: 23, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:24:36,683 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.562e+02 1.840e+02 2.258e+02 4.729e+02, threshold=3.681e+02, percent-clipped=2.0 2023-03-26 12:24:46,909 INFO [finetune.py:976] (6/7) Epoch 10, batch 4500, loss[loss=0.188, simple_loss=0.2754, pruned_loss=0.05033, over 4807.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2641, pruned_loss=0.06617, over 954247.07 frames. ], batch size: 40, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:25:19,742 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 12:25:31,191 INFO [finetune.py:976] (6/7) Epoch 10, batch 4550, loss[loss=0.1944, simple_loss=0.2613, pruned_loss=0.06371, over 4858.00 frames. ], tot_loss[loss=0.2, simple_loss=0.266, pruned_loss=0.06702, over 954093.20 frames. ], batch size: 31, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:25:33,147 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8004, 1.6916, 1.5252, 1.3291, 1.8239, 1.5753, 1.7559, 1.7795], device='cuda:6'), covar=tensor([0.1365, 0.1995, 0.3061, 0.2563, 0.2521, 0.1674, 0.2302, 0.1759], device='cuda:6'), in_proj_covar=tensor([0.0175, 0.0187, 0.0232, 0.0253, 0.0239, 0.0196, 0.0211, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:25:34,314 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56104.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:25:41,055 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6192, 2.3730, 1.8756, 0.8679, 2.0294, 2.0316, 1.8060, 2.1429], device='cuda:6'), covar=tensor([0.0752, 0.0873, 0.1656, 0.2350, 0.1660, 0.2357, 0.2358, 0.1005], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0201, 0.0203, 0.0188, 0.0218, 0.0209, 0.0225, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:25:53,263 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.686e+02 1.941e+02 2.447e+02 3.858e+02, threshold=3.882e+02, percent-clipped=3.0 2023-03-26 12:26:02,540 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56145.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:26:04,874 INFO [finetune.py:976] (6/7) Epoch 10, batch 4600, loss[loss=0.1611, simple_loss=0.2251, pruned_loss=0.04853, over 4757.00 frames. ], tot_loss[loss=0.1993, simple_loss=0.2649, pruned_loss=0.06683, over 953329.45 frames. ], batch size: 23, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:26:15,156 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-26 12:26:40,464 INFO [finetune.py:976] (6/7) Epoch 10, batch 4650, loss[loss=0.1591, simple_loss=0.2226, pruned_loss=0.04777, over 4914.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.262, pruned_loss=0.06615, over 953277.80 frames. 
], batch size: 37, lr: 3.73e-03, grad_scale: 32.0 2023-03-26 12:26:43,721 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56204.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:26:46,311 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56206.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:26:56,612 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7604, 3.8612, 3.7548, 1.8052, 3.9893, 2.9882, 0.7505, 2.7058], device='cuda:6'), covar=tensor([0.2345, 0.1854, 0.1376, 0.3380, 0.0986, 0.0970, 0.4568, 0.1529], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0175, 0.0159, 0.0129, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 12:27:11,819 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.561e+02 1.851e+02 2.355e+02 3.865e+02, threshold=3.702e+02, percent-clipped=0.0 2023-03-26 12:27:23,135 INFO [finetune.py:976] (6/7) Epoch 10, batch 4700, loss[loss=0.1585, simple_loss=0.2299, pruned_loss=0.04357, over 4823.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2585, pruned_loss=0.06488, over 952806.94 frames. ], batch size: 40, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:28:03,986 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7619, 1.6906, 1.4577, 1.3636, 1.8015, 1.4935, 1.8324, 1.7458], device='cuda:6'), covar=tensor([0.1565, 0.2190, 0.3346, 0.2607, 0.2876, 0.1950, 0.2802, 0.1951], device='cuda:6'), in_proj_covar=tensor([0.0174, 0.0187, 0.0232, 0.0252, 0.0239, 0.0196, 0.0211, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:28:09,016 INFO [finetune.py:976] (6/7) Epoch 10, batch 4750, loss[loss=0.1711, simple_loss=0.2313, pruned_loss=0.05547, over 4790.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2576, pruned_loss=0.06512, over 953259.26 frames. 
], batch size: 29, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:28:15,123 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6358, 1.4587, 2.2323, 3.0202, 2.0563, 2.3214, 1.1585, 2.4589], device='cuda:6'), covar=tensor([0.1601, 0.1361, 0.1053, 0.0577, 0.0795, 0.1446, 0.1662, 0.0512], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0100, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 12:28:16,373 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5810, 1.5354, 1.4652, 1.5606, 1.0871, 3.3060, 1.2556, 1.7593], device='cuda:6'), covar=tensor([0.3174, 0.2272, 0.2016, 0.2232, 0.1717, 0.0198, 0.2841, 0.1344], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0116, 0.0098, 0.0099, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 12:28:28,572 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4662, 1.4611, 1.3158, 1.3868, 1.8368, 1.7203, 1.5774, 1.2538], device='cuda:6'), covar=tensor([0.0334, 0.0387, 0.0509, 0.0343, 0.0200, 0.0512, 0.0279, 0.0437], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0110, 0.0140, 0.0114, 0.0101, 0.0103, 0.0092, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0935e-05, 8.5775e-05, 1.1108e-04, 8.9383e-05, 7.8821e-05, 7.6383e-05, 6.9585e-05, 8.2696e-05], device='cuda:6') 2023-03-26 12:28:30,218 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.456e+02 1.800e+02 2.273e+02 6.888e+02, threshold=3.601e+02, percent-clipped=2.0 2023-03-26 12:28:42,335 INFO [finetune.py:976] (6/7) Epoch 10, batch 4800, loss[loss=0.2129, simple_loss=0.2785, pruned_loss=0.07369, over 4925.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2605, pruned_loss=0.06668, over 952137.84 frames. 
], batch size: 38, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:28:57,436 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5969, 1.9904, 1.5157, 1.5829, 2.2255, 1.9863, 1.9094, 1.8083], device='cuda:6'), covar=tensor([0.0483, 0.0314, 0.0535, 0.0340, 0.0258, 0.0717, 0.0366, 0.0369], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0110, 0.0139, 0.0114, 0.0100, 0.0103, 0.0092, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0844e-05, 8.5598e-05, 1.1062e-04, 8.9343e-05, 7.8499e-05, 7.6383e-05, 6.9361e-05, 8.2579e-05], device='cuda:6') 2023-03-26 12:29:09,688 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2705, 1.6678, 0.8555, 2.2291, 2.6068, 1.7796, 2.0434, 2.0734], device='cuda:6'), covar=tensor([0.1380, 0.1930, 0.2174, 0.1115, 0.1713, 0.2063, 0.1329, 0.1947], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0097, 0.0114, 0.0093, 0.0122, 0.0095, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 12:29:12,063 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3364, 1.3951, 1.4092, 1.5266, 1.4827, 3.0080, 1.3477, 1.5142], device='cuda:6'), covar=tensor([0.1092, 0.1797, 0.1241, 0.0994, 0.1687, 0.0305, 0.1475, 0.1761], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0076, 0.0078, 0.0092, 0.0082, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 12:29:14,947 INFO [finetune.py:976] (6/7) Epoch 10, batch 4850, loss[loss=0.231, simple_loss=0.2986, pruned_loss=0.08174, over 4925.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2642, pruned_loss=0.06743, over 954040.08 frames. ], batch size: 33, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:29:19,688 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56404.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:29:26,343 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0741, 1.9645, 1.9037, 2.1174, 2.7619, 2.1170, 2.0186, 1.5451], device='cuda:6'), covar=tensor([0.2446, 0.2267, 0.2040, 0.1766, 0.1922, 0.1254, 0.2301, 0.2053], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0207, 0.0207, 0.0188, 0.0240, 0.0181, 0.0213, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:29:37,026 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.738e+02 2.004e+02 2.451e+02 5.164e+02, threshold=4.009e+02, percent-clipped=2.0 2023-03-26 12:29:40,751 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-26 12:29:48,224 INFO [finetune.py:976] (6/7) Epoch 10, batch 4900, loss[loss=0.191, simple_loss=0.268, pruned_loss=0.05705, over 4735.00 frames. ], tot_loss[loss=0.2004, simple_loss=0.2659, pruned_loss=0.06743, over 956041.30 frames. 
], batch size: 59, lr: 3.73e-03, grad_scale: 64.0 2023-03-26 12:29:50,493 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=56452.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:30:12,076 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1655, 2.0668, 2.3869, 1.7069, 2.2269, 2.4015, 2.3128, 1.8264], device='cuda:6'), covar=tensor([0.0694, 0.0721, 0.0635, 0.0894, 0.0545, 0.0719, 0.0657, 0.1069], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0134, 0.0144, 0.0126, 0.0121, 0.0144, 0.0144, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:30:26,422 INFO [finetune.py:976] (6/7) Epoch 10, batch 4950, loss[loss=0.191, simple_loss=0.2516, pruned_loss=0.06521, over 4721.00 frames. ], tot_loss[loss=0.2007, simple_loss=0.2665, pruned_loss=0.06751, over 953609.48 frames. ], batch size: 23, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:30:32,457 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56501.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:30:34,876 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56504.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:30:49,797 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56523.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:30:55,720 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.612e+02 1.965e+02 2.275e+02 4.231e+02, threshold=3.931e+02, percent-clipped=1.0 2023-03-26 12:31:06,810 INFO [finetune.py:976] (6/7) Epoch 10, batch 5000, loss[loss=0.1784, simple_loss=0.2462, pruned_loss=0.05533, over 4833.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2621, pruned_loss=0.06547, over 953423.75 frames. ], batch size: 47, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:31:08,677 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=56552.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:31:08,702 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.9042, 4.2142, 4.4392, 4.6693, 4.6113, 4.3586, 5.0085, 1.5974], device='cuda:6'), covar=tensor([0.0669, 0.0766, 0.0692, 0.0803, 0.1212, 0.1784, 0.0584, 0.5566], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0246, 0.0278, 0.0292, 0.0334, 0.0287, 0.0303, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:31:29,657 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56584.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 12:31:31,503 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7591, 2.9699, 2.7800, 2.1390, 2.9226, 3.1387, 3.0685, 2.5606], device='cuda:6'), covar=tensor([0.0695, 0.0575, 0.0718, 0.0879, 0.0585, 0.0726, 0.0661, 0.0939], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0134, 0.0144, 0.0125, 0.0120, 0.0144, 0.0143, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:31:39,220 INFO [finetune.py:976] (6/7) Epoch 10, batch 5050, loss[loss=0.1891, simple_loss=0.2505, pruned_loss=0.06385, over 4817.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2592, pruned_loss=0.06462, over 954375.44 frames. 
], batch size: 38, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:32:04,810 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.243e+02 1.580e+02 1.790e+02 2.049e+02 5.062e+02, threshold=3.579e+02, percent-clipped=1.0 2023-03-26 12:32:14,687 INFO [finetune.py:976] (6/7) Epoch 10, batch 5100, loss[loss=0.2032, simple_loss=0.2577, pruned_loss=0.07439, over 4295.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2548, pruned_loss=0.06261, over 954729.02 frames. ], batch size: 18, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:32:18,883 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.59 vs. limit=5.0 2023-03-26 12:32:44,344 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4923, 1.5771, 1.2601, 1.4899, 1.8567, 1.6806, 1.5249, 1.3428], device='cuda:6'), covar=tensor([0.0391, 0.0288, 0.0631, 0.0290, 0.0192, 0.0445, 0.0347, 0.0398], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0110, 0.0139, 0.0114, 0.0101, 0.0102, 0.0092, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0921e-05, 8.5684e-05, 1.1064e-04, 8.9233e-05, 7.8822e-05, 7.5948e-05, 6.9472e-05, 8.2705e-05], device='cuda:6') 2023-03-26 12:32:55,111 INFO [finetune.py:976] (6/7) Epoch 10, batch 5150, loss[loss=0.2011, simple_loss=0.2554, pruned_loss=0.07345, over 4152.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.256, pruned_loss=0.06389, over 953095.39 frames. ], batch size: 18, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:33:04,195 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-26 12:33:11,725 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 12:33:24,252 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5164, 1.4431, 1.7252, 1.8365, 1.6888, 3.4198, 1.3878, 1.5459], device='cuda:6'), covar=tensor([0.0986, 0.1744, 0.1044, 0.0899, 0.1481, 0.0248, 0.1433, 0.1713], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0091, 0.0082, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 12:33:27,185 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.919e+01 1.632e+02 1.974e+02 2.331e+02 5.610e+02, threshold=3.948e+02, percent-clipped=3.0 2023-03-26 12:33:36,883 INFO [finetune.py:976] (6/7) Epoch 10, batch 5200, loss[loss=0.1808, simple_loss=0.2486, pruned_loss=0.05656, over 4745.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2591, pruned_loss=0.06512, over 950278.71 frames. ], batch size: 27, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:33:39,191 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0 2023-03-26 12:34:10,223 INFO [finetune.py:976] (6/7) Epoch 10, batch 5250, loss[loss=0.2115, simple_loss=0.2813, pruned_loss=0.07087, over 4773.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.262, pruned_loss=0.06588, over 950409.81 frames. ], batch size: 54, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:34:11,648 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=56801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:34:12,866 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56803.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:34:26,273 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. 
limit=5.0 2023-03-26 12:34:34,254 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.706e+02 2.047e+02 2.503e+02 5.084e+02, threshold=4.093e+02, percent-clipped=2.0 2023-03-26 12:34:43,965 INFO [finetune.py:976] (6/7) Epoch 10, batch 5300, loss[loss=0.2069, simple_loss=0.2841, pruned_loss=0.06487, over 4865.00 frames. ], tot_loss[loss=0.1995, simple_loss=0.2648, pruned_loss=0.06711, over 951068.53 frames. ], batch size: 31, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:34:44,027 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=56849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:34:45,887 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0932, 4.9221, 4.6574, 2.8530, 5.0388, 3.7245, 1.1070, 3.4523], device='cuda:6'), covar=tensor([0.1997, 0.1244, 0.1203, 0.2365, 0.0596, 0.0791, 0.3974, 0.1154], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0175, 0.0160, 0.0129, 0.0156, 0.0122, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 12:34:47,161 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2868, 1.5193, 2.1453, 2.0225, 1.8973, 1.8286, 1.9757, 2.0090], device='cuda:6'), covar=tensor([0.3284, 0.3944, 0.3677, 0.3877, 0.4875, 0.3799, 0.4560, 0.3347], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0240, 0.0254, 0.0259, 0.0253, 0.0230, 0.0275, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:34:53,684 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56864.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:35:04,627 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=56879.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 12:35:17,629 INFO [finetune.py:976] (6/7) Epoch 10, batch 5350, loss[loss=0.1859, simple_loss=0.2617, pruned_loss=0.05499, over 4778.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2644, pruned_loss=0.0665, over 953155.20 frames. ], batch size: 29, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:35:23,292 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56908.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:35:25,321 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.69 vs. limit=2.0 2023-03-26 12:35:49,117 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.924e+01 1.584e+02 1.842e+02 2.192e+02 3.665e+02, threshold=3.684e+02, percent-clipped=0.0 2023-03-26 12:36:02,277 INFO [finetune.py:976] (6/7) Epoch 10, batch 5400, loss[loss=0.1451, simple_loss=0.2127, pruned_loss=0.03874, over 4760.00 frames. ], tot_loss[loss=0.1964, simple_loss=0.2615, pruned_loss=0.0656, over 953479.72 frames. ], batch size: 27, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:36:06,092 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0 2023-03-26 12:36:15,030 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=56969.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:36:15,834 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. 
limit=2.0 2023-03-26 12:36:32,494 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56993.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:36:33,664 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=56995.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:36:35,991 INFO [finetune.py:976] (6/7) Epoch 10, batch 5450, loss[loss=0.213, simple_loss=0.271, pruned_loss=0.07754, over 4823.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2584, pruned_loss=0.06479, over 954329.44 frames. ], batch size: 38, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:36:57,712 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.491e+02 1.807e+02 2.344e+02 4.842e+02, threshold=3.613e+02, percent-clipped=5.0 2023-03-26 12:37:09,488 INFO [finetune.py:976] (6/7) Epoch 10, batch 5500, loss[loss=0.2028, simple_loss=0.2568, pruned_loss=0.07438, over 4764.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2556, pruned_loss=0.06418, over 953055.20 frames. ], batch size: 27, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:37:12,668 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57054.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:37:13,846 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57056.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:37:22,457 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-26 12:37:42,203 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.8158, 4.2129, 4.4097, 4.5872, 4.5617, 4.2614, 4.9334, 1.4507], device='cuda:6'), covar=tensor([0.0730, 0.0808, 0.0672, 0.0943, 0.1214, 0.1468, 0.0586, 0.5828], device='cuda:6'), in_proj_covar=tensor([0.0352, 0.0248, 0.0279, 0.0293, 0.0336, 0.0288, 0.0303, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:37:43,351 INFO [finetune.py:976] (6/7) Epoch 10, batch 5550, loss[loss=0.1856, simple_loss=0.2443, pruned_loss=0.06343, over 4787.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2578, pruned_loss=0.06498, over 953517.23 frames. ], batch size: 26, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:38:06,740 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.869e+01 1.587e+02 1.788e+02 2.090e+02 3.209e+02, threshold=3.576e+02, percent-clipped=0.0 2023-03-26 12:38:25,601 INFO [finetune.py:976] (6/7) Epoch 10, batch 5600, loss[loss=0.2322, simple_loss=0.2752, pruned_loss=0.09455, over 4822.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2618, pruned_loss=0.06602, over 952951.78 frames. ], batch size: 30, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:38:33,398 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. 
limit=2.0 2023-03-26 12:38:35,047 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57159.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:38:46,656 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57179.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 12:38:50,750 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57186.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:38:53,069 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2249, 2.5140, 3.3671, 2.4234, 3.2765, 3.3808, 2.5936, 3.4562], device='cuda:6'), covar=tensor([0.1108, 0.1921, 0.1145, 0.1709, 0.0734, 0.1303, 0.2326, 0.0844], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0207, 0.0195, 0.0192, 0.0178, 0.0217, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:38:57,563 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57197.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:38:58,646 INFO [finetune.py:976] (6/7) Epoch 10, batch 5650, loss[loss=0.2274, simple_loss=0.2935, pruned_loss=0.08059, over 4922.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2636, pruned_loss=0.06592, over 952385.24 frames. ], batch size: 38, lr: 3.72e-03, grad_scale: 64.0 2023-03-26 12:39:00,950 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 12:39:15,289 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57227.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 12:39:15,952 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9029, 2.5852, 2.3835, 3.0047, 2.5034, 2.6474, 2.4081, 3.5637], device='cuda:6'), covar=tensor([0.4081, 0.5035, 0.3584, 0.4196, 0.4337, 0.2501, 0.5183, 0.1532], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0258, 0.0223, 0.0278, 0.0243, 0.0209, 0.0245, 0.0213], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:39:19,290 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.551e+02 1.804e+02 2.162e+02 3.713e+02, threshold=3.608e+02, percent-clipped=1.0 2023-03-26 12:39:25,361 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57244.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:39:27,127 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57247.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:39:28,226 INFO [finetune.py:976] (6/7) Epoch 10, batch 5700, loss[loss=0.1765, simple_loss=0.2419, pruned_loss=0.05552, over 4125.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2588, pruned_loss=0.06523, over 931168.64 frames. 
], batch size: 17, lr: 3.72e-03, grad_scale: 32.0 2023-03-26 12:39:34,005 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57258.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 12:39:36,654 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3127, 1.8006, 2.2305, 2.1557, 2.0060, 1.9513, 2.0982, 2.0897], device='cuda:6'), covar=tensor([0.3194, 0.3604, 0.3118, 0.3463, 0.4561, 0.3388, 0.4371, 0.3058], device='cuda:6'), in_proj_covar=tensor([0.0235, 0.0238, 0.0252, 0.0256, 0.0251, 0.0228, 0.0273, 0.0229], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:39:37,766 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57264.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:39:39,587 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6099, 2.4925, 2.5870, 1.9626, 2.5498, 2.6657, 2.6317, 2.2227], device='cuda:6'), covar=tensor([0.0529, 0.0597, 0.0718, 0.0817, 0.0698, 0.0681, 0.0644, 0.0975], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0135, 0.0145, 0.0127, 0.0121, 0.0145, 0.0146, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:40:00,692 INFO [finetune.py:976] (6/7) Epoch 11, batch 0, loss[loss=0.1994, simple_loss=0.269, pruned_loss=0.06492, over 4899.00 frames. ], tot_loss[loss=0.1994, simple_loss=0.269, pruned_loss=0.06492, over 4899.00 frames. ], batch size: 46, lr: 3.72e-03, grad_scale: 16.0 2023-03-26 12:40:00,693 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 12:40:16,055 INFO [finetune.py:1010] (6/7) Epoch 11, validation: loss=0.1597, simple_loss=0.2306, pruned_loss=0.04438, over 2265189.00 frames. 2023-03-26 12:40:16,056 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 12:40:37,200 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57305.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:40:45,887 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0893, 1.0008, 0.9975, 0.3531, 0.8736, 1.1688, 1.1799, 0.9745], device='cuda:6'), covar=tensor([0.0855, 0.0506, 0.0454, 0.0536, 0.0493, 0.0578, 0.0336, 0.0662], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0152, 0.0120, 0.0132, 0.0130, 0.0123, 0.0142, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.4325e-05, 1.1183e-04, 8.6372e-05, 9.5568e-05, 9.2786e-05, 8.9576e-05, 1.0424e-04, 1.0590e-04], device='cuda:6') 2023-03-26 12:40:59,548 INFO [finetune.py:976] (6/7) Epoch 11, batch 50, loss[loss=0.178, simple_loss=0.2443, pruned_loss=0.0559, over 4815.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2607, pruned_loss=0.06453, over 215431.13 frames. ], batch size: 38, lr: 3.72e-03, grad_scale: 16.0 2023-03-26 12:41:10,026 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.580e+02 1.868e+02 2.535e+02 4.204e+02, threshold=3.735e+02, percent-clipped=3.0 2023-03-26 12:41:18,573 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57349.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:41:19,796 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57351.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:41:38,107 INFO [finetune.py:976] (6/7) Epoch 11, batch 100, loss[loss=0.201, simple_loss=0.2538, pruned_loss=0.07412, over 4721.00 frames. 
], tot_loss[loss=0.1985, simple_loss=0.2616, pruned_loss=0.06772, over 382298.72 frames. ], batch size: 59, lr: 3.72e-03, grad_scale: 16.0 2023-03-26 12:41:46,599 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3324, 2.4693, 2.3413, 1.8188, 2.2779, 2.7542, 2.5965, 2.1375], device='cuda:6'), covar=tensor([0.0593, 0.0582, 0.0760, 0.0835, 0.0978, 0.0621, 0.0558, 0.0997], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0134, 0.0144, 0.0125, 0.0120, 0.0144, 0.0145, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:41:51,433 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57398.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:42:11,568 INFO [finetune.py:976] (6/7) Epoch 11, batch 150, loss[loss=0.2321, simple_loss=0.2966, pruned_loss=0.08382, over 4830.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.2578, pruned_loss=0.06636, over 509762.34 frames. ], batch size: 39, lr: 3.72e-03, grad_scale: 16.0 2023-03-26 12:42:16,966 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.728e+02 2.070e+02 2.489e+02 4.280e+02, threshold=4.140e+02, percent-clipped=3.0 2023-03-26 12:42:23,480 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 12:42:31,674 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57459.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:42:31,698 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57459.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:42:33,498 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57462.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:42:35,985 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4749, 3.8671, 4.0446, 4.2578, 4.2251, 3.9253, 4.5370, 1.4172], device='cuda:6'), covar=tensor([0.0733, 0.0853, 0.0794, 0.1051, 0.1274, 0.1642, 0.0685, 0.5728], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0245, 0.0276, 0.0290, 0.0331, 0.0285, 0.0301, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:42:36,015 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6189, 1.5698, 1.5424, 1.6449, 0.9507, 3.6577, 1.4970, 2.2569], device='cuda:6'), covar=tensor([0.3459, 0.2581, 0.2118, 0.2326, 0.2056, 0.0163, 0.2665, 0.1134], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0122, 0.0115, 0.0098, 0.0099, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 12:42:44,009 INFO [finetune.py:976] (6/7) Epoch 11, batch 200, loss[loss=0.2068, simple_loss=0.2821, pruned_loss=0.06574, over 4873.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2563, pruned_loss=0.06535, over 609159.25 frames. 
], batch size: 34, lr: 3.72e-03, grad_scale: 16.0 2023-03-26 12:42:52,811 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7750, 1.5710, 1.4041, 1.2461, 1.5499, 1.5516, 1.5129, 2.1004], device='cuda:6'), covar=tensor([0.4461, 0.4148, 0.3427, 0.3920, 0.4015, 0.2450, 0.4071, 0.1888], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0261, 0.0225, 0.0281, 0.0245, 0.0211, 0.0249, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:42:54,059 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.50 vs. limit=5.0 2023-03-26 12:42:56,409 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57495.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:43:03,709 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57507.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:43:14,466 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57523.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:43:17,306 INFO [finetune.py:976] (6/7) Epoch 11, batch 250, loss[loss=0.2244, simple_loss=0.291, pruned_loss=0.07893, over 4831.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2617, pruned_loss=0.06732, over 687391.02 frames. ], batch size: 47, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:43:22,637 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.287e+02 1.584e+02 1.966e+02 2.356e+02 4.681e+02, threshold=3.932e+02, percent-clipped=1.0 2023-03-26 12:43:26,176 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2621, 4.7646, 4.5039, 2.8706, 4.8284, 3.7652, 0.8874, 3.2365], device='cuda:6'), covar=tensor([0.2064, 0.1746, 0.1413, 0.2832, 0.0921, 0.0806, 0.4717, 0.1400], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0173, 0.0158, 0.0128, 0.0155, 0.0121, 0.0144, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 12:43:27,970 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57542.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:43:43,775 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57553.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 12:43:45,618 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57556.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:43:46,874 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0122, 1.4240, 1.9785, 1.9519, 1.7691, 1.6984, 1.8981, 1.8393], device='cuda:6'), covar=tensor([0.4534, 0.4910, 0.4018, 0.4044, 0.5475, 0.4197, 0.5132, 0.3813], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0239, 0.0254, 0.0257, 0.0252, 0.0229, 0.0274, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:43:53,993 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1291, 2.7519, 2.9017, 3.0170, 2.9479, 2.7420, 3.1416, 0.9955], device='cuda:6'), covar=tensor([0.1096, 0.1118, 0.1154, 0.1262, 0.1554, 0.1879, 0.1052, 0.4831], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0247, 0.0277, 0.0292, 0.0333, 0.0287, 0.0302, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:43:55,220 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, 
batch_count=57564.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:44:08,373 INFO [finetune.py:976] (6/7) Epoch 11, batch 300, loss[loss=0.1873, simple_loss=0.2444, pruned_loss=0.06509, over 4286.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2636, pruned_loss=0.06742, over 747077.25 frames. ], batch size: 19, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:44:09,319 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0 2023-03-26 12:44:18,963 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8055, 3.2934, 3.2122, 1.7909, 3.4973, 2.6662, 1.4297, 2.3082], device='cuda:6'), covar=tensor([0.2874, 0.2105, 0.1534, 0.2945, 0.1118, 0.0910, 0.3308, 0.1455], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0173, 0.0159, 0.0128, 0.0156, 0.0121, 0.0144, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 12:44:24,434 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57600.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:44:25,769 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 12:44:31,734 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57612.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:44:40,866 INFO [finetune.py:976] (6/7) Epoch 11, batch 350, loss[loss=0.1708, simple_loss=0.2495, pruned_loss=0.04603, over 4729.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2644, pruned_loss=0.0674, over 789950.09 frames. ], batch size: 54, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:44:46,738 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.573e+02 1.819e+02 2.403e+02 4.156e+02, threshold=3.639e+02, percent-clipped=1.0 2023-03-26 12:44:56,795 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57649.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:44:58,444 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57651.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:45:14,035 INFO [finetune.py:976] (6/7) Epoch 11, batch 400, loss[loss=0.2045, simple_loss=0.2753, pruned_loss=0.06691, over 4913.00 frames. ], tot_loss[loss=0.2008, simple_loss=0.2665, pruned_loss=0.06759, over 828248.64 frames. 
], batch size: 38, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:45:26,787 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8932, 1.7110, 2.2149, 1.5612, 2.0817, 2.1412, 1.6061, 2.2829], device='cuda:6'), covar=tensor([0.1536, 0.1945, 0.1614, 0.1986, 0.0996, 0.1431, 0.2787, 0.0963], device='cuda:6'), in_proj_covar=tensor([0.0199, 0.0206, 0.0194, 0.0192, 0.0178, 0.0216, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:45:30,902 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57697.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:45:32,135 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57699.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:45:46,095 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1719, 2.0194, 1.6785, 2.0916, 2.1416, 1.8249, 2.4466, 2.1403], device='cuda:6'), covar=tensor([0.1364, 0.2443, 0.3423, 0.2737, 0.2626, 0.1708, 0.3453, 0.1883], device='cuda:6'), in_proj_covar=tensor([0.0175, 0.0188, 0.0234, 0.0255, 0.0240, 0.0197, 0.0213, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:45:49,608 INFO [finetune.py:976] (6/7) Epoch 11, batch 450, loss[loss=0.1674, simple_loss=0.2352, pruned_loss=0.04979, over 4867.00 frames. ], tot_loss[loss=0.1987, simple_loss=0.2642, pruned_loss=0.06662, over 855654.67 frames. ], batch size: 34, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:45:51,644 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-26 12:45:53,858 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=57733.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:45:55,473 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.602e+02 1.902e+02 2.220e+02 3.989e+02, threshold=3.804e+02, percent-clipped=2.0 2023-03-26 12:46:15,574 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57754.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:46:32,835 INFO [finetune.py:976] (6/7) Epoch 11, batch 500, loss[loss=0.1941, simple_loss=0.2459, pruned_loss=0.07118, over 4760.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2616, pruned_loss=0.06606, over 878389.05 frames. 
], batch size: 26, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:46:36,679 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3286, 2.2135, 1.7165, 2.1812, 2.2608, 1.9500, 2.5962, 2.3001], device='cuda:6'), covar=tensor([0.1370, 0.2563, 0.3519, 0.3107, 0.2845, 0.1885, 0.3492, 0.1888], device='cuda:6'), in_proj_covar=tensor([0.0175, 0.0188, 0.0234, 0.0255, 0.0240, 0.0196, 0.0213, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:46:44,584 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1456, 2.1847, 2.2640, 1.5298, 2.3932, 2.4723, 2.3128, 1.9708], device='cuda:6'), covar=tensor([0.0637, 0.0600, 0.0694, 0.0917, 0.0539, 0.0575, 0.0608, 0.0968], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0133, 0.0143, 0.0125, 0.0120, 0.0143, 0.0144, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:46:45,715 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=57794.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:46:50,001 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5157, 1.4100, 1.3363, 1.4263, 1.8304, 1.6375, 1.5606, 1.3142], device='cuda:6'), covar=tensor([0.0305, 0.0264, 0.0538, 0.0287, 0.0185, 0.0461, 0.0280, 0.0359], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0108, 0.0138, 0.0113, 0.0100, 0.0102, 0.0092, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0581e-05, 8.4258e-05, 1.0973e-04, 8.8673e-05, 7.8363e-05, 7.5950e-05, 6.9677e-05, 8.2137e-05], device='cuda:6') 2023-03-26 12:47:01,348 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57818.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:47:06,699 INFO [finetune.py:976] (6/7) Epoch 11, batch 550, loss[loss=0.2192, simple_loss=0.2716, pruned_loss=0.08342, over 4851.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2597, pruned_loss=0.0664, over 896996.24 frames. ], batch size: 49, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:47:11,537 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.185e+02 1.635e+02 1.936e+02 2.160e+02 3.511e+02, threshold=3.871e+02, percent-clipped=0.0 2023-03-26 12:47:16,786 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57842.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:47:23,741 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=57851.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:47:24,997 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57853.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 12:47:28,015 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3582, 2.8726, 2.7645, 1.2175, 2.9517, 2.1106, 0.7457, 1.8741], device='cuda:6'), covar=tensor([0.2582, 0.2746, 0.1940, 0.3666, 0.1636, 0.1253, 0.4287, 0.1668], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0173, 0.0159, 0.0127, 0.0156, 0.0121, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 12:47:40,101 INFO [finetune.py:976] (6/7) Epoch 11, batch 600, loss[loss=0.233, simple_loss=0.3089, pruned_loss=0.07858, over 4907.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2601, pruned_loss=0.06667, over 909913.05 frames. 
], batch size: 37, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:47:48,475 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57890.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:47:56,177 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=57900.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:47:56,761 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57901.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:48:13,626 INFO [finetune.py:976] (6/7) Epoch 11, batch 650, loss[loss=0.2487, simple_loss=0.3178, pruned_loss=0.0898, over 4812.00 frames. ], tot_loss[loss=0.1983, simple_loss=0.2628, pruned_loss=0.06688, over 919696.13 frames. ], batch size: 39, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:48:18,497 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.568e+02 1.897e+02 2.360e+02 4.682e+02, threshold=3.793e+02, percent-clipped=3.0 2023-03-26 12:48:27,942 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=57948.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:48:30,327 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8273, 1.6581, 1.5358, 1.8648, 2.2090, 1.9788, 1.4060, 1.4362], device='cuda:6'), covar=tensor([0.1989, 0.1979, 0.1872, 0.1526, 0.1715, 0.1075, 0.2495, 0.1891], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0208, 0.0208, 0.0189, 0.0242, 0.0181, 0.0213, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:48:48,747 INFO [finetune.py:976] (6/7) Epoch 11, batch 700, loss[loss=0.1904, simple_loss=0.2612, pruned_loss=0.05983, over 4856.00 frames. ], tot_loss[loss=0.2006, simple_loss=0.2657, pruned_loss=0.0678, over 929815.89 frames. ], batch size: 31, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:49:01,309 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9065, 1.8395, 1.5137, 1.9176, 1.9456, 1.6142, 2.1458, 1.9007], device='cuda:6'), covar=tensor([0.1273, 0.2171, 0.2955, 0.2205, 0.2308, 0.1635, 0.3265, 0.1870], device='cuda:6'), in_proj_covar=tensor([0.0175, 0.0188, 0.0233, 0.0254, 0.0240, 0.0196, 0.0212, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 12:49:44,562 INFO [finetune.py:976] (6/7) Epoch 11, batch 750, loss[loss=0.2184, simple_loss=0.2993, pruned_loss=0.06869, over 4696.00 frames. ], tot_loss[loss=0.2037, simple_loss=0.2685, pruned_loss=0.06946, over 935773.65 frames. ], batch size: 54, lr: 3.71e-03, grad_scale: 16.0 2023-03-26 12:49:49,411 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.116e+01 1.579e+02 1.894e+02 2.321e+02 4.436e+02, threshold=3.789e+02, percent-clipped=3.0 2023-03-26 12:50:02,177 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58054.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 12:50:18,111 INFO [finetune.py:976] (6/7) Epoch 11, batch 800, loss[loss=0.2126, simple_loss=0.2767, pruned_loss=0.07425, over 4823.00 frames. ], tot_loss[loss=0.2019, simple_loss=0.2673, pruned_loss=0.06821, over 940578.76 frames. 
2023-03-26 12:50:25,466 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58089.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:50:33,871 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58102.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:50:45,549 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58118.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:50:51,411 INFO [finetune.py:976] (6/7) Epoch 11, batch 850, loss[loss=0.2248, simple_loss=0.2738, pruned_loss=0.08786, over 4800.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2639, pruned_loss=0.06689, over 943880.20 frames. ], batch size: 45, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:50:56,226 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.394e+01 1.505e+02 1.749e+02 2.082e+02 4.545e+02, threshold=3.498e+02, percent-clipped=2.0
2023-03-26 12:50:56,924 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4921, 2.2164, 1.9617, 1.0306, 2.1634, 1.9543, 1.8260, 2.1361], device='cuda:6'), covar=tensor([0.0935, 0.0949, 0.1651, 0.2049, 0.1595, 0.2231, 0.2113, 0.0990], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0201, 0.0202, 0.0188, 0.0216, 0.0209, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 12:50:59,937 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58141.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:01,775 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58144.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:05,921 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58151.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:09,347 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6166, 2.2537, 1.9459, 0.9635, 2.1190, 1.9783, 1.9304, 2.1099], device='cuda:6'), covar=tensor([0.0888, 0.1067, 0.1803, 0.2259, 0.1642, 0.2222, 0.2033, 0.1143], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0201, 0.0202, 0.0188, 0.0216, 0.0208, 0.0223, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 12:51:23,204 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58166.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:33,621 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1044, 1.7622, 2.5154, 4.0226, 2.7148, 2.7901, 1.0834, 3.2318], device='cuda:6'), covar=tensor([0.1767, 0.1504, 0.1310, 0.0545, 0.0779, 0.1585, 0.1914, 0.0479], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0101, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 12:51:35,956 INFO [finetune.py:976] (6/7) Epoch 11, batch 900, loss[loss=0.171, simple_loss=0.2347, pruned_loss=0.05366, over 4899.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2605, pruned_loss=0.06588, over 947853.22 frames. ], batch size: 35, lr: 3.71e-03, grad_scale: 16.0
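The zipformer.py:1188 lines track stochastic layer skipping per encoder stack: each stack has its own warmup_begin/warmup_end window measured in batch_count, and on each batch a few layers may be skipped (num_to_drop, layers_to_drop). A sketch of one plausible sampling rule, assuming a constant small per-layer drop probability, which is consistent with the log (num_to_drop is usually 0 and occasionally 1 even long past warmup_end); the actual Zipformer schedule may differ:

```python
import random

def sample_layers_to_drop(num_layers: int, p_drop: float = 0.075) -> set:
    """Hypothetical rule: skip each layer independently with a small
    probability p_drop (an assumed constant, not read from the code)."""
    return {i for i in range(num_layers) if random.random() < p_drop}

# Example for a 4-layer stack: usually set(), occasionally e.g. {0}.
drops = sample_layers_to_drop(num_layers=4)
print(f"num_to_drop={len(drops)}, layers_to_drop={drops or set()}")
```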
2023-03-26 12:51:57,410 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58199.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:51:59,479 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58202.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:52:01,304 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58205.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:52:12,172 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6355, 1.5444, 1.4888, 1.5854, 1.0502, 3.3433, 1.3903, 1.8843], device='cuda:6'), covar=tensor([0.3326, 0.2345, 0.2162, 0.2275, 0.1908, 0.0218, 0.2729, 0.1292], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0123, 0.0115, 0.0098, 0.0099, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 12:52:17,503 INFO [finetune.py:976] (6/7) Epoch 11, batch 950, loss[loss=0.1926, simple_loss=0.2455, pruned_loss=0.06982, over 4826.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2575, pruned_loss=0.06464, over 948658.01 frames. ], batch size: 25, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:52:22,879 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.516e+02 1.975e+02 2.310e+02 4.008e+02, threshold=3.950e+02, percent-clipped=1.0
2023-03-26 12:52:40,827 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 12:52:47,907 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6771, 1.3473, 2.1159, 1.3770, 1.7649, 1.9168, 1.3154, 2.0435], device='cuda:6'), covar=tensor([0.1534, 0.2366, 0.1158, 0.1944, 0.1122, 0.1503, 0.2967, 0.1002], device='cuda:6'), in_proj_covar=tensor([0.0200, 0.0207, 0.0196, 0.0193, 0.0180, 0.0217, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 12:52:51,454 INFO [finetune.py:976] (6/7) Epoch 11, batch 1000, loss[loss=0.1951, simple_loss=0.2495, pruned_loss=0.07032, over 4822.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2593, pruned_loss=0.06514, over 949781.98 frames. ], batch size: 30, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:53:46,405 INFO [finetune.py:976] (6/7) Epoch 11, batch 1050, loss[loss=0.2151, simple_loss=0.2864, pruned_loss=0.07187, over 4816.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2619, pruned_loss=0.06549, over 952477.88 frames. ], batch size: 38, lr: 3.71e-03, grad_scale: 16.0
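The scaling.py:679 lines come from a whitening constraint on intermediate activations: a whiteness metric is computed from the channel covariance (channels split into num_groups groups), and a penalty kicks in only when the metric exceeds limit. One plausible form of such a metric is the ratio of the mean squared covariance eigenvalue to the squared mean eigenvalue, which equals 1.0 for perfectly white features; this is sketched below as an assumption, not the exact icefall formula:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """Whiteness of features x of shape (num_frames, num_channels).

    Assumption: metric = mean(eig(C)^2) / mean(eig(C))^2 of the per-group
    channel covariance C, computed via traces; 1.0 means perfectly white.
    """
    n, c = x.shape
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)  # (g, n, c/g)
    cov = torch.matmul(x.transpose(1, 2), x) / n                   # (g, c/g, c/g)
    d = cov.shape[-1]
    # mean(eig^2) = trace(C @ C) / d ; mean(eig)^2 = (trace(C) / d)^2
    mean_eig_sq = (cov * cov).sum(dim=(1, 2)) / d
    sq_mean_eig = (cov.diagonal(dim1=1, dim2=2).sum(dim=1) / d) ** 2
    return (mean_eig_sq / sq_mean_eig).mean()

feats = torch.randn(200, 96)                    # e.g. num_channels=96
metric = whitening_metric(feats, num_groups=8)  # logged as "metric=... vs. limit=2.0"
```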
2023-03-26 12:53:51,319 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.197e+02 1.617e+02 2.003e+02 2.375e+02 3.670e+02, threshold=4.006e+02, percent-clipped=0.0
2023-03-26 12:54:09,345 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4471, 1.3220, 1.9111, 2.8784, 1.8738, 2.0109, 1.2217, 2.2158], device='cuda:6'), covar=tensor([0.1698, 0.1509, 0.1173, 0.0596, 0.0895, 0.1578, 0.1459, 0.0632], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0133, 0.0163, 0.0101, 0.0138, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 12:54:22,082 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7564, 1.6161, 2.0758, 1.9959, 1.8072, 4.3231, 1.5982, 1.7826], device='cuda:6'), covar=tensor([0.1019, 0.1843, 0.1319, 0.1001, 0.1642, 0.0260, 0.1470, 0.1873], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 12:54:42,548 INFO [finetune.py:976] (6/7) Epoch 11, batch 1100, loss[loss=0.2201, simple_loss=0.2946, pruned_loss=0.07282, over 4758.00 frames. ], tot_loss[loss=0.1984, simple_loss=0.2639, pruned_loss=0.06643, over 953562.27 frames. ], batch size: 59, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:54:55,683 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58389.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:55:35,372 INFO [finetune.py:976] (6/7) Epoch 11, batch 1150, loss[loss=0.2316, simple_loss=0.2926, pruned_loss=0.08533, over 4759.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2648, pruned_loss=0.06664, over 954024.17 frames. ], batch size: 27, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:55:40,654 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.672e+02 1.870e+02 2.321e+02 4.403e+02, threshold=3.740e+02, percent-clipped=1.0
2023-03-26 12:55:41,943 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58437.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:56:08,439 INFO [finetune.py:976] (6/7) Epoch 11, batch 1200, loss[loss=0.1905, simple_loss=0.24, pruned_loss=0.07047, over 4818.00 frames. ], tot_loss[loss=0.1978, simple_loss=0.2629, pruned_loss=0.06635, over 952898.06 frames. ], batch size: 30, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:56:21,458 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58497.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:56:23,266 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=58500.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:56:27,210 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-26 12:56:32,662 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-26 12:56:40,497 INFO [finetune.py:976] (6/7) Epoch 11, batch 1250, loss[loss=0.1525, simple_loss=0.2289, pruned_loss=0.03809, over 4755.00 frames. ], tot_loss[loss=0.1969, simple_loss=0.2615, pruned_loss=0.06618, over 954166.74 frames. ], batch size: 26, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:56:46,792 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.332e+01 1.581e+02 1.822e+02 2.261e+02 4.369e+02, threshold=3.644e+02, percent-clipped=3.0
2023-03-26 12:56:49,325 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0014, 2.0140, 2.0958, 1.7165, 2.0552, 2.3143, 2.2042, 1.8062], device='cuda:6'), covar=tensor([0.0459, 0.0446, 0.0575, 0.0691, 0.0755, 0.0448, 0.0437, 0.0836], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0133, 0.0142, 0.0124, 0.0120, 0.0143, 0.0143, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 12:57:07,712 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7104, 3.6261, 3.4495, 1.8583, 3.7086, 2.7594, 0.7926, 2.5918], device='cuda:6'), covar=tensor([0.2441, 0.2017, 0.1549, 0.3055, 0.1239, 0.1039, 0.4370, 0.1401], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0173, 0.0158, 0.0128, 0.0155, 0.0120, 0.0144, 0.0120], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 12:57:15,452 INFO [finetune.py:976] (6/7) Epoch 11, batch 1300, loss[loss=0.1691, simple_loss=0.2437, pruned_loss=0.04731, over 4832.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2584, pruned_loss=0.06474, over 952498.63 frames. ], batch size: 30, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:57:17,840 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4863, 3.8640, 4.0912, 4.3231, 4.2383, 4.0071, 4.5718, 1.5223], device='cuda:6'), covar=tensor([0.0719, 0.0780, 0.0788, 0.0950, 0.1199, 0.1323, 0.0564, 0.5340], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0278, 0.0292, 0.0332, 0.0285, 0.0302, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 12:57:23,113 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0
2023-03-26 12:57:28,407 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2999, 3.7704, 3.9340, 4.1235, 4.0589, 3.8379, 4.4489, 1.4612], device='cuda:6'), covar=tensor([0.0903, 0.0836, 0.0760, 0.1122, 0.1341, 0.1558, 0.0626, 0.5786], device='cuda:6'), in_proj_covar=tensor([0.0352, 0.0245, 0.0278, 0.0293, 0.0333, 0.0285, 0.0302, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 12:57:48,896 INFO [finetune.py:976] (6/7) Epoch 11, batch 1350, loss[loss=0.1468, simple_loss=0.2103, pruned_loss=0.04165, over 4816.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2591, pruned_loss=0.06544, over 953669.89 frames. ], batch size: 25, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:57:54,729 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.484e+01 1.581e+02 1.914e+02 2.266e+02 4.857e+02, threshold=3.829e+02, percent-clipped=2.0
2023-03-26 12:58:05,362 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3431, 2.9306, 2.8342, 1.2310, 3.0626, 2.1551, 0.6422, 1.8709], device='cuda:6'), covar=tensor([0.2212, 0.2429, 0.1671, 0.3575, 0.1267, 0.1265, 0.4047, 0.1629], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0174, 0.0159, 0.0128, 0.0156, 0.0121, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 12:58:23,951 INFO [finetune.py:976] (6/7) Epoch 11, batch 1400, loss[loss=0.1851, simple_loss=0.2573, pruned_loss=0.05643, over 4941.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2621, pruned_loss=0.06595, over 952772.93 frames. ], batch size: 33, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:58:26,977 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58681.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:58:33,486 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-26 12:58:54,789 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-26 12:58:56,046 INFO [finetune.py:976] (6/7) Epoch 11, batch 1450, loss[loss=0.1919, simple_loss=0.2701, pruned_loss=0.05691, over 4813.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.2632, pruned_loss=0.06557, over 953066.42 frames. ], batch size: 39, lr: 3.71e-03, grad_scale: 16.0
2023-03-26 12:59:01,960 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.669e+02 2.009e+02 2.318e+02 4.324e+02, threshold=4.017e+02, percent-clipped=1.0
2023-03-26 12:59:07,256 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58742.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:59:35,117 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58775.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 12:59:36,228 INFO [finetune.py:976] (6/7) Epoch 11, batch 1500, loss[loss=0.2186, simple_loss=0.2813, pruned_loss=0.07799, over 4859.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2641, pruned_loss=0.06594, over 954443.28 frames. ], batch size: 34, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 12:59:48,135 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.86 vs. limit=5.0
2023-03-26 12:59:58,353 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58797.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:04,305 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=58800.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:34,188 INFO [finetune.py:976] (6/7) Epoch 11, batch 1550, loss[loss=0.1547, simple_loss=0.218, pruned_loss=0.04569, over 4829.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2624, pruned_loss=0.0646, over 955371.98 frames. ], batch size: 25, lr: 3.70e-03, grad_scale: 16.0
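The lr field decays smoothly within the epoch (3.71e-03 earlier, 3.70e-03 from around batch 1500 / batch_count 58800). The logged values are consistent with an Eden-style schedule, lr = base_lr * ((step^2 + lr_batches^2)/lr_batches^2)^(-1/4) * ((epoch^2 + lr_epochs^2)/lr_epochs^2)^(-1/4), assuming this run's configured base_lr=0.004, lr_batches=100000, and lr_epochs=100. A standalone numeric check:

```python
def eden_lr(base_lr: float, step: float, epoch: float,
            lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
    """Eden-style schedule; the form and constants here are assumptions
    checked against the logged lr values, not read from the code."""
    batch_factor = ((step**2 + lr_batches**2) / lr_batches**2) ** -0.25
    epoch_factor = ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
    return base_lr * batch_factor * epoch_factor

print(f"{eden_lr(0.004, step=57800, epoch=11):.2e}")  # -> 3.71e-03, as logged
print(f"{eden_lr(0.004, step=58800, epoch=11):.2e}")  # -> 3.70e-03, as logged
```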
2023-03-26 13:00:39,953 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.185e+02 1.569e+02 1.959e+02 2.197e+02 4.059e+02, threshold=3.918e+02, percent-clipped=1.0
2023-03-26 13:00:41,223 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58836.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:47,624 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58845.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:00:49,963 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=58848.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:01:07,921 INFO [finetune.py:976] (6/7) Epoch 11, batch 1600, loss[loss=0.2166, simple_loss=0.269, pruned_loss=0.08213, over 4827.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.26, pruned_loss=0.06388, over 957203.11 frames. ], batch size: 33, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:01:28,381 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=58906.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:01:50,384 INFO [finetune.py:976] (6/7) Epoch 11, batch 1650, loss[loss=0.1852, simple_loss=0.2473, pruned_loss=0.0615, over 4819.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2574, pruned_loss=0.06328, over 957149.27 frames. ], batch size: 39, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:01:55,258 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.664e+02 1.923e+02 2.390e+02 4.121e+02, threshold=3.846e+02, percent-clipped=1.0
2023-03-26 13:02:18,248 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=58967.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:02:24,168 INFO [finetune.py:976] (6/7) Epoch 11, batch 1700, loss[loss=0.2065, simple_loss=0.2747, pruned_loss=0.0692, over 4835.00 frames. ], tot_loss[loss=0.1906, simple_loss=0.2552, pruned_loss=0.06295, over 957521.16 frames. ], batch size: 47, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:02:57,897 INFO [finetune.py:976] (6/7) Epoch 11, batch 1750, loss[loss=0.2959, simple_loss=0.3406, pruned_loss=0.1256, over 4202.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2574, pruned_loss=0.06405, over 954646.50 frames. ], batch size: 65, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:03:02,754 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.140e+02 1.620e+02 1.895e+02 2.249e+02 5.052e+02, threshold=3.790e+02, percent-clipped=2.0
2023-03-26 13:03:04,075 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59037.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:03:04,776 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8186, 1.7607, 1.5928, 1.9664, 2.3441, 2.0811, 1.4323, 1.4104], device='cuda:6'), covar=tensor([0.2500, 0.2228, 0.2043, 0.1786, 0.1969, 0.1154, 0.2695, 0.2190], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0209, 0.0209, 0.0190, 0.0243, 0.0183, 0.0214, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:03:33,672 INFO [finetune.py:976] (6/7) Epoch 11, batch 1800, loss[loss=0.2206, simple_loss=0.2818, pruned_loss=0.07968, over 4772.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2609, pruned_loss=0.06465, over 954856.21 frames. ], batch size: 29, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:04:19,596 INFO [finetune.py:976] (6/7) Epoch 11, batch 1850, loss[loss=0.1812, simple_loss=0.2431, pruned_loss=0.05966, over 4756.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2624, pruned_loss=0.06492, over 954219.78 frames. ], batch size: 26, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:04:22,107 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59131.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:04:24,432 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.037e+02 1.668e+02 2.065e+02 2.636e+02 4.490e+02, threshold=4.130e+02, percent-clipped=5.0
2023-03-26 13:04:43,992 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59164.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:04:57,370 INFO [finetune.py:976] (6/7) Epoch 11, batch 1900, loss[loss=0.2246, simple_loss=0.2849, pruned_loss=0.08212, over 4186.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2643, pruned_loss=0.06608, over 953375.98 frames. ], batch size: 65, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:05:20,999 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3819, 1.2288, 1.6014, 1.0897, 1.2299, 1.4009, 1.2031, 1.5538], device='cuda:6'), covar=tensor([0.1238, 0.2292, 0.1180, 0.1418, 0.1075, 0.1305, 0.2789, 0.0983], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0206, 0.0194, 0.0190, 0.0179, 0.0216, 0.0218, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:05:46,838 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59225.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:05:47,936 INFO [finetune.py:976] (6/7) Epoch 11, batch 1950, loss[loss=0.174, simple_loss=0.2521, pruned_loss=0.04794, over 4768.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2625, pruned_loss=0.06542, over 952598.55 frames. ], batch size: 26, lr: 3.70e-03, grad_scale: 16.0
2023-03-26 13:05:59,308 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.570e+02 1.817e+02 2.294e+02 4.310e+02, threshold=3.633e+02, percent-clipped=1.0
2023-03-26 13:06:20,685 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8142, 1.6090, 1.4573, 1.2587, 1.5932, 1.5448, 1.5660, 2.1476], device='cuda:6'), covar=tensor([0.4198, 0.4563, 0.3281, 0.4119, 0.4106, 0.2462, 0.3893, 0.1789], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0262, 0.0226, 0.0281, 0.0244, 0.0212, 0.0248, 0.0217], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:06:30,345 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59262.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:06:51,938 INFO [finetune.py:976] (6/7) Epoch 11, batch 2000, loss[loss=0.1838, simple_loss=0.2505, pruned_loss=0.05852, over 4744.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2598, pruned_loss=0.06445, over 954003.67 frames. ], batch size: 26, lr: 3.70e-03, grad_scale: 32.0
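Note that grad_scale jumps from 16.0 to 32.0 at batch 2000 (and later to 64.0 at batch 4000): this is standard fp16 loss-scaling behaviour, where the scale doubles after a fixed interval of overflow-free steps and is halved on overflow. With PyTorch's own AMP scaler, an equivalent configuration would look like the following (parameter values are assumptions chosen to match the logged pattern, not read from the training code):

```python
import torch

# Sketch of the doubling behaviour seen in the log (16 -> 32 -> 64
# every 2000 clean steps); the concrete values here are assumptions.
scaler = torch.cuda.amp.GradScaler(
    init_scale=16.0,       # matches the first logged grad_scale
    growth_factor=2.0,     # scale doubles ...
    growth_interval=2000,  # ... after 2000 overflow-free steps
    backoff_factor=0.5,    # and halves when an inf/NaN is seen
)
```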
2023-03-26 13:07:03,167 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5642, 1.4786, 1.8931, 2.9403, 1.9748, 2.1456, 0.9897, 2.3759], device='cuda:6'), covar=tensor([0.1794, 0.1415, 0.1260, 0.0602, 0.0864, 0.1304, 0.1815, 0.0625], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0100, 0.0136, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 13:07:37,498 INFO [finetune.py:976] (6/7) Epoch 11, batch 2050, loss[loss=0.1621, simple_loss=0.2354, pruned_loss=0.04439, over 4749.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2556, pruned_loss=0.06269, over 953461.71 frames. ], batch size: 27, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:07:42,276 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.122e+01 1.513e+02 1.843e+02 2.174e+02 3.611e+02, threshold=3.686e+02, percent-clipped=0.0
2023-03-26 13:07:50,359 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59337.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:08:12,628 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3092, 1.3165, 1.5665, 1.0670, 1.1951, 1.4210, 1.2854, 1.5925], device='cuda:6'), covar=tensor([0.1124, 0.2136, 0.1229, 0.1547, 0.0942, 0.1355, 0.2853, 0.0858], device='cuda:6'), in_proj_covar=tensor([0.0199, 0.0207, 0.0194, 0.0192, 0.0179, 0.0216, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:08:17,285 INFO [finetune.py:976] (6/7) Epoch 11, batch 2100, loss[loss=0.2283, simple_loss=0.2942, pruned_loss=0.08118, over 4829.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.2564, pruned_loss=0.06352, over 953779.96 frames. ], batch size: 33, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:08:27,934 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59385.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:09:08,919 INFO [finetune.py:976] (6/7) Epoch 11, batch 2150, loss[loss=0.2127, simple_loss=0.2773, pruned_loss=0.074, over 4814.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2604, pruned_loss=0.06492, over 954765.49 frames. ], batch size: 39, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:09:12,220 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0
2023-03-26 13:09:12,443 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 13:09:13,327 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59431.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:09:15,687 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.234e+02 1.596e+02 1.893e+02 2.254e+02 5.168e+02, threshold=3.786e+02, percent-clipped=3.0
2023-03-26 13:09:18,334 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0
2023-03-26 13:09:54,868 INFO [finetune.py:976] (6/7) Epoch 11, batch 2200, loss[loss=0.2005, simple_loss=0.2698, pruned_loss=0.06561, over 4897.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2624, pruned_loss=0.06489, over 956044.58 frames. ], batch size: 36, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:09:56,665 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59479.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:10:02,358 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0
2023-03-26 13:10:13,084 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59502.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:10:17,057 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 13:10:25,070 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59520.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:10:26,218 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7423, 1.6925, 1.4671, 1.6231, 2.1462, 1.8459, 1.7349, 1.4852], device='cuda:6'), covar=tensor([0.0269, 0.0294, 0.0507, 0.0269, 0.0151, 0.0452, 0.0294, 0.0391], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0110, 0.0140, 0.0114, 0.0102, 0.0104, 0.0093, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.0906e-05, 8.5386e-05, 1.1164e-04, 8.9545e-05, 7.9849e-05, 7.6957e-05, 7.0651e-05, 8.3396e-05], device='cuda:6')
2023-03-26 13:10:30,587 INFO [finetune.py:976] (6/7) Epoch 11, batch 2250, loss[loss=0.1614, simple_loss=0.2234, pruned_loss=0.04965, over 4739.00 frames. ], tot_loss[loss=0.1985, simple_loss=0.2647, pruned_loss=0.0662, over 957739.49 frames. ], batch size: 23, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:10:37,560 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.729e+02 2.023e+02 2.518e+02 3.990e+02, threshold=4.047e+02, percent-clipped=2.0
2023-03-26 13:10:57,865 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3373, 1.2975, 1.5076, 1.0781, 1.2820, 1.4524, 1.2595, 1.6022], device='cuda:6'), covar=tensor([0.1178, 0.2102, 0.1268, 0.1583, 0.0974, 0.1253, 0.2944, 0.0767], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0206, 0.0193, 0.0190, 0.0178, 0.0214, 0.0217, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:11:02,694 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59562.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:11:03,350 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59563.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:11:13,472 INFO [finetune.py:976] (6/7) Epoch 11, batch 2300, loss[loss=0.1612, simple_loss=0.2362, pruned_loss=0.04311, over 4801.00 frames. ], tot_loss[loss=0.1996, simple_loss=0.2661, pruned_loss=0.06651, over 956118.58 frames. ], batch size: 25, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:11:35,301 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59610.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:11:47,098 INFO [finetune.py:976] (6/7) Epoch 11, batch 2350, loss[loss=0.1804, simple_loss=0.2494, pruned_loss=0.05569, over 4882.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.2626, pruned_loss=0.06493, over 955688.16 frames. ], batch size: 32, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:11:52,462 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.309e+01 1.451e+02 1.728e+02 2.097e+02 4.600e+02, threshold=3.455e+02, percent-clipped=1.0
2023-03-26 13:11:56,082 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6303, 1.1431, 0.8814, 1.5707, 2.0641, 1.0921, 1.3906, 1.4742], device='cuda:6'), covar=tensor([0.1437, 0.2094, 0.1937, 0.1200, 0.1895, 0.1987, 0.1478, 0.1950], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0094, 0.0121, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 13:12:04,633 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7761, 1.7449, 1.5381, 1.6539, 2.0783, 1.8840, 1.7628, 1.5229], device='cuda:6'), covar=tensor([0.0257, 0.0248, 0.0556, 0.0288, 0.0180, 0.0474, 0.0292, 0.0400], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0109, 0.0140, 0.0114, 0.0102, 0.0103, 0.0093, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.0626e-05, 8.4788e-05, 1.1103e-04, 8.9120e-05, 7.9463e-05, 7.6618e-05, 7.0151e-05, 8.2922e-05], device='cuda:6')
2023-03-26 13:12:19,961 INFO [finetune.py:976] (6/7) Epoch 11, batch 2400, loss[loss=0.1987, simple_loss=0.2576, pruned_loss=0.06984, over 4878.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.259, pruned_loss=0.06376, over 954404.83 frames. ], batch size: 34, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:12:53,270 INFO [finetune.py:976] (6/7) Epoch 11, batch 2450, loss[loss=0.2188, simple_loss=0.2723, pruned_loss=0.08262, over 4904.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2558, pruned_loss=0.0629, over 955438.07 frames. ], batch size: 32, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:13:01,212 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.641e+02 1.877e+02 2.149e+02 5.374e+02, threshold=3.753e+02, percent-clipped=2.0
2023-03-26 13:13:11,259 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1951, 2.0924, 1.7911, 2.2048, 2.0805, 2.0198, 2.0031, 2.9193], device='cuda:6'), covar=tensor([0.4343, 0.5721, 0.3743, 0.4976, 0.5205, 0.2801, 0.5211, 0.1823], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0261, 0.0224, 0.0280, 0.0244, 0.0211, 0.0247, 0.0217], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:13:22,713 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9548, 1.6155, 2.2079, 1.4474, 2.0116, 2.1805, 1.5481, 2.3598], device='cuda:6'), covar=tensor([0.1493, 0.2506, 0.1419, 0.2236, 0.0924, 0.1456, 0.3033, 0.0916], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0207, 0.0194, 0.0191, 0.0178, 0.0216, 0.0218, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:13:37,051 INFO [finetune.py:976] (6/7) Epoch 11, batch 2500, loss[loss=0.2456, simple_loss=0.3053, pruned_loss=0.09297, over 4802.00 frames. ], tot_loss[loss=0.194, simple_loss=0.259, pruned_loss=0.06451, over 955498.86 frames. ], batch size: 51, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:14:29,719 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=59820.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:14:33,894 INFO [finetune.py:976] (6/7) Epoch 11, batch 2550, loss[loss=0.276, simple_loss=0.3188, pruned_loss=0.1166, over 4276.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2628, pruned_loss=0.06531, over 951885.19 frames. ], batch size: 65, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:14:40,186 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.636e+02 1.885e+02 2.323e+02 4.849e+02, threshold=3.771e+02, percent-clipped=2.0
2023-03-26 13:14:49,251 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59848.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:14:57,195 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=59858.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:03,810 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=59868.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:08,730 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9019, 1.8752, 1.7245, 2.0419, 1.4268, 4.6604, 1.7887, 2.3282], device='cuda:6'), covar=tensor([0.3171, 0.2400, 0.2110, 0.2207, 0.1770, 0.0111, 0.2486, 0.1248], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0119, 0.0123, 0.0115, 0.0098, 0.0099, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 13:15:09,219 INFO [finetune.py:976] (6/7) Epoch 11, batch 2600, loss[loss=0.1791, simple_loss=0.238, pruned_loss=0.06014, over 4749.00 frames. ], tot_loss[loss=0.1972, simple_loss=0.2638, pruned_loss=0.06532, over 953691.09 frames. ], batch size: 27, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:15:15,204 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8758, 1.8378, 1.6053, 2.0276, 2.4788, 2.0698, 1.6262, 1.5094], device='cuda:6'), covar=tensor([0.2153, 0.1940, 0.1920, 0.1595, 0.1650, 0.1133, 0.2353, 0.1948], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0206, 0.0208, 0.0189, 0.0241, 0.0182, 0.0212, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:15:18,115 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=59889.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:31,228 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59909.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:15:42,446 INFO [finetune.py:976] (6/7) Epoch 11, batch 2650, loss[loss=0.163, simple_loss=0.2204, pruned_loss=0.05279, over 4698.00 frames. ], tot_loss[loss=0.1983, simple_loss=0.2649, pruned_loss=0.06582, over 953438.54 frames. ], batch size: 23, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:15:47,335 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.166e+02 1.549e+02 1.976e+02 2.444e+02 3.877e+02, threshold=3.952e+02, percent-clipped=1.0
2023-03-26 13:15:48,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9975, 1.6006, 2.4731, 3.8351, 2.7227, 2.6442, 0.8569, 3.0575], device='cuda:6'), covar=tensor([0.1847, 0.1692, 0.1367, 0.0529, 0.0799, 0.1692, 0.2085, 0.0542], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 13:16:03,045 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=59950.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:16:29,562 INFO [finetune.py:976] (6/7) Epoch 11, batch 2700, loss[loss=0.1733, simple_loss=0.2299, pruned_loss=0.05832, over 4858.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2621, pruned_loss=0.0648, over 951522.24 frames. ], batch size: 31, lr: 3.70e-03, grad_scale: 32.0
2023-03-26 13:16:38,531 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8954, 1.6497, 2.3801, 1.5173, 2.0596, 2.1976, 1.6306, 2.3438], device='cuda:6'), covar=tensor([0.1358, 0.2052, 0.1385, 0.2111, 0.0838, 0.1448, 0.2813, 0.0739], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0206, 0.0194, 0.0190, 0.0178, 0.0215, 0.0218, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:16:42,155 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0
2023-03-26 13:17:04,323 INFO [finetune.py:976] (6/7) Epoch 11, batch 2750, loss[loss=0.1552, simple_loss=0.2238, pruned_loss=0.04328, over 4765.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2596, pruned_loss=0.06432, over 952131.81 frames. ], batch size: 28, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:17:09,207 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.603e+02 1.823e+02 2.284e+02 4.397e+02, threshold=3.646e+02, percent-clipped=1.0
2023-03-26 13:17:12,385 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60040.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:22,088 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60052.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:30,388 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60065.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:37,359 INFO [finetune.py:976] (6/7) Epoch 11, batch 2800, loss[loss=0.1896, simple_loss=0.2415, pruned_loss=0.06883, over 4832.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2554, pruned_loss=0.06277, over 951281.53 frames. ], batch size: 33, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:17:38,102 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60078.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:17:50,517 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9459, 1.4094, 0.9535, 1.8050, 2.2784, 1.6403, 1.8299, 1.6883], device='cuda:6'), covar=tensor([0.1434, 0.2169, 0.2135, 0.1200, 0.1850, 0.1837, 0.1373, 0.2119], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0098, 0.0115, 0.0094, 0.0121, 0.0096, 0.0101, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-26 13:17:54,539 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60101.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 13:18:02,764 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60113.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 13:18:10,627 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60126.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:18:11,110 INFO [finetune.py:976] (6/7) Epoch 11, batch 2850, loss[loss=0.2558, simple_loss=0.3099, pruned_loss=0.1009, over 4131.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2556, pruned_loss=0.06337, over 952114.60 frames. ], batch size: 65, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:18:13,101 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2214, 2.6994, 2.5791, 1.2885, 2.6918, 2.3614, 2.0348, 2.2663], device='cuda:6'), covar=tensor([0.0908, 0.1203, 0.1812, 0.2655, 0.2195, 0.2237, 0.2565, 0.1508], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0200, 0.0202, 0.0187, 0.0216, 0.0209, 0.0223, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:18:17,975 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.579e+02 1.818e+02 2.348e+02 4.165e+02, threshold=3.636e+02, percent-clipped=3.0
2023-03-26 13:18:18,487 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0
2023-03-26 13:18:21,030 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60139.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:18:39,246 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60158.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:18:51,666 INFO [finetune.py:976] (6/7) Epoch 11, batch 2900, loss[loss=0.2495, simple_loss=0.3177, pruned_loss=0.09067, over 4896.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2569, pruned_loss=0.06353, over 949433.04 frames. ], batch size: 35, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:19:12,031 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60198.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:19:21,318 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60204.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:19:27,579 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60206.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:19:50,823 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6975, 1.6230, 1.3999, 1.6062, 2.0067, 1.7582, 1.5794, 1.3534], device='cuda:6'), covar=tensor([0.0241, 0.0285, 0.0517, 0.0260, 0.0169, 0.0458, 0.0345, 0.0407], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0109, 0.0139, 0.0113, 0.0101, 0.0103, 0.0093, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.0583e-05, 8.4536e-05, 1.1092e-04, 8.8772e-05, 7.9144e-05, 7.6542e-05, 7.0327e-05, 8.2739e-05], device='cuda:6')
2023-03-26 13:19:51,304 INFO [finetune.py:976] (6/7) Epoch 11, batch 2950, loss[loss=0.2211, simple_loss=0.2824, pruned_loss=0.07994, over 4828.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2602, pruned_loss=0.06451, over 949869.77 frames. ], batch size: 30, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:20:00,143 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.723e+02 2.035e+02 2.444e+02 4.360e+02, threshold=4.070e+02, percent-clipped=6.0
2023-03-26 13:20:06,701 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1931, 3.5503, 3.8056, 4.0581, 3.9796, 3.7462, 4.2482, 1.4499], device='cuda:6'), covar=tensor([0.0701, 0.0819, 0.0801, 0.0923, 0.1027, 0.1262, 0.0656, 0.5278], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0245, 0.0277, 0.0295, 0.0331, 0.0286, 0.0304, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:20:06,703 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60245.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:20:16,636 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60259.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:20:24,325 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0097, 1.3582, 1.9500, 1.9320, 1.6895, 1.6101, 1.7763, 1.7591], device='cuda:6'), covar=tensor([0.3863, 0.4544, 0.3891, 0.4019, 0.5232, 0.4124, 0.4946, 0.3672], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0239, 0.0253, 0.0258, 0.0254, 0.0230, 0.0273, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:20:26,138 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4486, 1.3126, 1.2610, 1.3507, 1.6403, 1.4821, 1.3671, 1.2108], device='cuda:6'), covar=tensor([0.0260, 0.0274, 0.0563, 0.0291, 0.0209, 0.0468, 0.0289, 0.0375], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0109, 0.0139, 0.0114, 0.0101, 0.0103, 0.0093, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.0759e-05, 8.4623e-05, 1.1097e-04, 8.8893e-05, 7.9195e-05, 7.6549e-05, 7.0436e-05, 8.2779e-05], device='cuda:6')
2023-03-26 13:20:28,425 INFO [finetune.py:976] (6/7) Epoch 11, batch 3000, loss[loss=0.1901, simple_loss=0.2565, pruned_loss=0.06191, over 4761.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2605, pruned_loss=0.0638, over 952068.62 frames. ], batch size: 28, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:20:28,425 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 13:20:30,304 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4203, 1.5984, 1.5276, 1.6212, 1.5921, 2.9714, 1.4623, 1.5837], device='cuda:6'), covar=tensor([0.0937, 0.1640, 0.1050, 0.0931, 0.1526, 0.0332, 0.1354, 0.1602], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0081, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 13:20:33,407 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7373, 1.5532, 2.0046, 2.9121, 2.0363, 2.1956, 1.0885, 2.3659], device='cuda:6'), covar=tensor([0.1618, 0.1415, 0.1155, 0.0598, 0.0835, 0.1294, 0.1694, 0.0587], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0101, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 13:20:38,150 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8724, 3.4194, 3.5928, 3.8012, 3.6136, 3.4645, 3.9298, 1.2736], device='cuda:6'), covar=tensor([0.0875, 0.0936, 0.0878, 0.0912, 0.1554, 0.1648, 0.0865, 0.5357], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0277, 0.0295, 0.0333, 0.0288, 0.0305, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:20:38,898 INFO [finetune.py:1010] (6/7) Epoch 11, validation: loss=0.1572, simple_loss=0.2284, pruned_loss=0.04301, over 2265189.00 frames.
2023-03-26 13:20:38,898 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB
2023-03-26 13:20:48,873 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-26 13:20:57,689 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7549, 1.4817, 1.9115, 3.2574, 2.1433, 2.3833, 0.9730, 2.5925], device='cuda:6'), covar=tensor([0.1774, 0.1683, 0.1445, 0.0700, 0.0955, 0.1176, 0.2022, 0.0601], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0164, 0.0101, 0.0138, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 13:21:00,128 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.9610, 0.9234, 0.9162, 1.0962, 1.1951, 1.0969, 1.0055, 0.8944], device='cuda:6'), covar=tensor([0.0344, 0.0281, 0.0598, 0.0246, 0.0265, 0.0377, 0.0317, 0.0386], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0109, 0.0140, 0.0114, 0.0102, 0.0104, 0.0093, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.0883e-05, 8.4789e-05, 1.1115e-04, 8.9051e-05, 7.9379e-05, 7.6756e-05, 7.0684e-05, 8.3047e-05], device='cuda:6')
2023-03-26 13:21:10,750 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4881, 1.5884, 1.6515, 1.7511, 1.6120, 3.2214, 1.3867, 1.6828], device='cuda:6'), covar=tensor([0.0961, 0.1681, 0.1087, 0.0957, 0.1570, 0.0253, 0.1459, 0.1673], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0075, 0.0078, 0.0092, 0.0081, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 13:21:13,696 INFO [finetune.py:976] (6/7) Epoch 11, batch 3050, loss[loss=0.2077, simple_loss=0.2728, pruned_loss=0.07133, over 4823.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2617, pruned_loss=0.0643, over 952513.05 frames. ], batch size: 30, lr: 3.69e-03, grad_scale: 32.0
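At batch 3000 the loop pauses for validation: the model is scored on the dev set with the same frame-weighted bookkeeping ("validation: loss=0.1572 ... over 2265189.00 frames"), and peak CUDA memory is reported. A generic sketch of that step, with hypothetical helper names (compute_loss is assumed, not icefall's exact function):

```python
import torch

def validate(model, dev_loader, compute_loss, device) -> float:
    """Frame-weighted validation loss, as in the 'Computing validation
    loss' / 'validation: loss=...' lines. compute_loss is assumed to
    return (loss_sum, num_frames) for one batch."""
    model.eval()
    loss_sum, frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            batch_loss, batch_frames = compute_loss(model, batch)
            loss_sum += float(batch_loss)
            frames += float(batch_frames)
    model.train()
    peak_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"Maximum memory allocated so far is {peak_mb}MB")
    return loss_sum / max(frames, 1.0)
```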
2023-03-26 13:21:19,478 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.587e+02 1.939e+02 2.482e+02 4.597e+02, threshold=3.877e+02, percent-clipped=2.0
2023-03-26 13:21:22,661 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1678, 1.4940, 0.7030, 2.0724, 2.4482, 1.7216, 1.9153, 1.9567], device='cuda:6'), covar=tensor([0.1469, 0.1990, 0.2187, 0.1158, 0.1823, 0.1802, 0.1331, 0.2003], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0094, 0.0121, 0.0096, 0.0100, 0.0092], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 13:21:31,625 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 13:21:56,083 INFO [finetune.py:976] (6/7) Epoch 11, batch 3100, loss[loss=0.205, simple_loss=0.2618, pruned_loss=0.0741, over 4708.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2605, pruned_loss=0.06428, over 952920.36 frames. ], batch size: 23, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:22:08,706 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60396.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 13:22:16,690 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60408.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 13:22:25,057 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60421.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:22:29,560 INFO [finetune.py:976] (6/7) Epoch 11, batch 3150, loss[loss=0.1923, simple_loss=0.2601, pruned_loss=0.06229, over 4817.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2585, pruned_loss=0.0642, over 953878.22 frames. ], batch size: 40, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:22:34,349 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60434.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:22:34,872 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.624e+02 1.838e+02 2.200e+02 4.980e+02, threshold=3.676e+02, percent-clipped=1.0
2023-03-26 13:23:01,695 INFO [finetune.py:976] (6/7) Epoch 11, batch 3200, loss[loss=0.1969, simple_loss=0.2639, pruned_loss=0.06489, over 4938.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2556, pruned_loss=0.0629, over 954498.73 frames. ], batch size: 33, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:23:20,583 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60504.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:23:37,317 INFO [finetune.py:976] (6/7) Epoch 11, batch 3250, loss[loss=0.2105, simple_loss=0.276, pruned_loss=0.07252, over 4926.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2562, pruned_loss=0.0633, over 955084.57 frames. ], batch size: 38, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:23:48,952 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.174e+02 1.626e+02 1.982e+02 2.397e+02 3.737e+02, threshold=3.964e+02, percent-clipped=1.0
2023-03-26 13:23:59,839 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60545.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:24:04,035 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60552.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:24:05,289 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=60554.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:24:27,367 INFO [finetune.py:976] (6/7) Epoch 11, batch 3300, loss[loss=0.1563, simple_loss=0.2313, pruned_loss=0.04062, over 4750.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2599, pruned_loss=0.06406, over 954005.98 frames. ], batch size: 26, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:24:29,122 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7752, 1.2214, 0.9610, 1.5924, 2.1715, 1.1475, 1.4545, 1.5192], device='cuda:6'), covar=tensor([0.1261, 0.2089, 0.1884, 0.1197, 0.1816, 0.1937, 0.1463, 0.1865], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0097, 0.0114, 0.0094, 0.0120, 0.0095, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 13:24:38,799 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0063, 1.8588, 1.5452, 1.8525, 1.7268, 1.7351, 1.7473, 2.5047], device='cuda:6'), covar=tensor([0.4561, 0.5420, 0.3856, 0.4999, 0.5162, 0.2806, 0.4989, 0.1925], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0259, 0.0223, 0.0278, 0.0243, 0.0210, 0.0246, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:24:45,617 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60593.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:25:05,246 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0794, 0.9943, 1.0432, 0.3823, 0.8874, 1.1921, 1.2517, 1.0639], device='cuda:6'), covar=tensor([0.0826, 0.0543, 0.0483, 0.0513, 0.0502, 0.0516, 0.0363, 0.0578], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0153, 0.0122, 0.0132, 0.0130, 0.0125, 0.0143, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.4151e-05, 1.1233e-04, 8.7654e-05, 9.5881e-05, 9.2729e-05, 9.1058e-05, 1.0444e-04, 1.0686e-04], device='cuda:6')
2023-03-26 13:25:28,938 INFO [finetune.py:976] (6/7) Epoch 11, batch 3350, loss[loss=0.1684, simple_loss=0.2312, pruned_loss=0.05277, over 4816.00 frames. ], tot_loss[loss=0.1963, simple_loss=0.2621, pruned_loss=0.06528, over 953770.33 frames. ], batch size: 25, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:25:34,911 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.701e+02 2.036e+02 2.469e+02 3.577e+02, threshold=4.071e+02, percent-clipped=0.0
2023-03-26 13:25:58,932 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.67 vs. limit=2.0
2023-03-26 13:26:02,926 INFO [finetune.py:976] (6/7) Epoch 11, batch 3400, loss[loss=0.23, simple_loss=0.2947, pruned_loss=0.08266, over 4928.00 frames. ], tot_loss[loss=0.1964, simple_loss=0.2626, pruned_loss=0.06511, over 954820.08 frames. ], batch size: 42, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:26:17,017 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60696.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:24,740 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60708.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:30,777 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=60718.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:32,539 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60721.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:36,598 INFO [finetune.py:976] (6/7) Epoch 11, batch 3450, loss[loss=0.1599, simple_loss=0.2363, pruned_loss=0.04173, over 4805.00 frames. ], tot_loss[loss=0.1967, simple_loss=0.2632, pruned_loss=0.06508, over 955422.21 frames. ], batch size: 40, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:26:41,020 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60734.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:26:41,507 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.902e+01 1.594e+02 1.892e+02 2.253e+02 3.493e+02, threshold=3.783e+02, percent-clipped=0.0
2023-03-26 13:26:50,734 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0
2023-03-26 13:26:52,732 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60744.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:11,950 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60756.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:16,290 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3765, 1.4647, 1.1457, 1.2981, 1.7133, 1.6688, 1.5255, 1.3832], device='cuda:6'), covar=tensor([0.0355, 0.0392, 0.0850, 0.0416, 0.0280, 0.0494, 0.0360, 0.0474], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0110, 0.0142, 0.0115, 0.0103, 0.0105, 0.0094, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.1526e-05, 8.5438e-05, 1.1297e-04, 9.0023e-05, 8.0636e-05, 7.7906e-05, 7.1156e-05, 8.4181e-05], device='cuda:6')
2023-03-26 13:27:25,458 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60769.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:36,413 INFO [finetune.py:976] (6/7) Epoch 11, batch 3500, loss[loss=0.1609, simple_loss=0.2274, pruned_loss=0.04718, over 4929.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2594, pruned_loss=0.06362, over 957606.15 frames. ], batch size: 38, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:27:37,762 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=60779.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:27:45,018 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60782.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:28:12,298 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3518, 1.5037, 1.5221, 0.7483, 1.5189, 1.7173, 1.7945, 1.3589], device='cuda:6'), covar=tensor([0.0879, 0.0594, 0.0460, 0.0590, 0.0467, 0.0538, 0.0305, 0.0619], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0152, 0.0121, 0.0132, 0.0129, 0.0124, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3620e-05, 1.1167e-04, 8.7093e-05, 9.5406e-05, 9.2305e-05, 9.0382e-05, 1.0395e-04, 1.0654e-04], device='cuda:6')
2023-03-26 13:28:15,226 INFO [finetune.py:976] (6/7) Epoch 11, batch 3550, loss[loss=0.1917, simple_loss=0.2504, pruned_loss=0.06653, over 4896.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2577, pruned_loss=0.06355, over 957673.90 frames. ], batch size: 35, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:28:20,663 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.181e+02 1.566e+02 1.863e+02 2.348e+02 4.575e+02, threshold=3.726e+02, percent-clipped=4.0
2023-03-26 13:28:34,189 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=60854.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:28:34,220 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3037, 2.1711, 2.2171, 1.5293, 2.2552, 2.3363, 2.3103, 1.9031], device='cuda:6'), covar=tensor([0.0587, 0.0720, 0.0795, 0.1042, 0.0617, 0.0787, 0.0693, 0.1121], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0134, 0.0142, 0.0125, 0.0121, 0.0144, 0.0145, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 13:28:49,077 INFO [finetune.py:976] (6/7) Epoch 11, batch 3600, loss[loss=0.1594, simple_loss=0.2383, pruned_loss=0.04024, over 4771.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2554, pruned_loss=0.06267, over 958217.31 frames. ], batch size: 27, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:29:18,246 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=60902.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 13:29:39,495 INFO [finetune.py:976] (6/7) Epoch 11, batch 3650, loss[loss=0.2686, simple_loss=0.3317, pruned_loss=0.1027, over 4746.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2599, pruned_loss=0.06492, over 958186.44 frames. ], batch size: 54, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:29:44,368 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.638e+02 1.962e+02 2.312e+02 3.604e+02, threshold=3.924e+02, percent-clipped=0.0
2023-03-26 13:30:05,367 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.84 vs. limit=5.0
2023-03-26 13:30:33,789 INFO [finetune.py:976] (6/7) Epoch 11, batch 3700, loss[loss=0.2049, simple_loss=0.2694, pruned_loss=0.07019, over 4894.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2629, pruned_loss=0.06554, over 957282.95 frames. ], batch size: 32, lr: 3.69e-03, grad_scale: 32.0
2023-03-26 13:31:15,832 INFO [finetune.py:976] (6/7) Epoch 11, batch 3750, loss[loss=0.2131, simple_loss=0.2725, pruned_loss=0.07687, over 4896.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.2627, pruned_loss=0.06489, over 957209.42 frames. ], batch size: 43, lr: 3.69e-03, grad_scale: 32.0
], batch size: 43, lr: 3.69e-03, grad_scale: 32.0 2023-03-26 13:31:20,652 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.587e+02 1.819e+02 2.276e+02 4.586e+02, threshold=3.638e+02, percent-clipped=1.0 2023-03-26 13:31:38,946 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61061.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:31:39,055 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.26 vs. limit=5.0 2023-03-26 13:31:47,663 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=61074.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:31:49,400 INFO [finetune.py:976] (6/7) Epoch 11, batch 3800, loss[loss=0.1765, simple_loss=0.2495, pruned_loss=0.0518, over 4917.00 frames. ], tot_loss[loss=0.1965, simple_loss=0.2632, pruned_loss=0.06487, over 957730.18 frames. ], batch size: 33, lr: 3.69e-03, grad_scale: 32.0 2023-03-26 13:32:29,717 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61122.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:32:32,632 INFO [finetune.py:976] (6/7) Epoch 11, batch 3850, loss[loss=0.153, simple_loss=0.2273, pruned_loss=0.03931, over 4904.00 frames. ], tot_loss[loss=0.1949, simple_loss=0.2614, pruned_loss=0.06417, over 956790.38 frames. ], batch size: 46, lr: 3.69e-03, grad_scale: 32.0 2023-03-26 13:32:37,923 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.518e+02 1.864e+02 2.279e+02 4.215e+02, threshold=3.727e+02, percent-clipped=1.0 2023-03-26 13:32:44,167 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1976, 2.0236, 2.2610, 1.5317, 2.2352, 2.3644, 2.2492, 1.4888], device='cuda:6'), covar=tensor([0.0627, 0.0762, 0.0681, 0.1084, 0.0600, 0.0648, 0.0649, 0.1790], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0133, 0.0142, 0.0124, 0.0120, 0.0143, 0.0144, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:32:45,620 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.26 vs. limit=5.0 2023-03-26 13:33:05,951 INFO [finetune.py:976] (6/7) Epoch 11, batch 3900, loss[loss=0.2087, simple_loss=0.2607, pruned_loss=0.07832, over 4806.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2588, pruned_loss=0.06355, over 957202.03 frames. ], batch size: 25, lr: 3.69e-03, grad_scale: 32.0 2023-03-26 13:33:22,540 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 13:33:39,747 INFO [finetune.py:976] (6/7) Epoch 11, batch 3950, loss[loss=0.2396, simple_loss=0.2812, pruned_loss=0.09903, over 4868.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2562, pruned_loss=0.06296, over 957785.75 frames. ], batch size: 31, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:33:45,063 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.570e+02 1.907e+02 2.309e+02 4.377e+02, threshold=3.813e+02, percent-clipped=3.0 2023-03-26 13:33:54,393 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.84 vs. 
limit=5.0 2023-03-26 13:34:03,121 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2394, 2.1249, 2.0562, 2.4263, 2.8162, 2.4150, 2.5226, 1.7848], device='cuda:6'), covar=tensor([0.2205, 0.2114, 0.1940, 0.1644, 0.1709, 0.1072, 0.1872, 0.1863], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0206, 0.0208, 0.0188, 0.0240, 0.0181, 0.0211, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:34:12,138 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 13:34:12,379 INFO [finetune.py:976] (6/7) Epoch 11, batch 4000, loss[loss=0.2069, simple_loss=0.2595, pruned_loss=0.07712, over 4062.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2557, pruned_loss=0.06283, over 957107.44 frames. ], batch size: 18, lr: 3.68e-03, grad_scale: 64.0 2023-03-26 13:34:55,752 INFO [finetune.py:976] (6/7) Epoch 11, batch 4050, loss[loss=0.1972, simple_loss=0.2763, pruned_loss=0.05907, over 4825.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2598, pruned_loss=0.06473, over 953538.31 frames. ], batch size: 39, lr: 3.68e-03, grad_scale: 64.0 2023-03-26 13:35:04,890 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.652e+02 2.086e+02 2.571e+02 4.987e+02, threshold=4.171e+02, percent-clipped=6.0 2023-03-26 13:35:10,646 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 13:35:23,168 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8755, 4.0634, 3.8866, 2.0383, 4.1361, 3.0956, 0.8377, 2.7413], device='cuda:6'), covar=tensor([0.2502, 0.1437, 0.1298, 0.2922, 0.0849, 0.0898, 0.4164, 0.1444], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0175, 0.0160, 0.0129, 0.0156, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 13:35:33,871 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4953, 1.4844, 1.8875, 1.9684, 1.6422, 3.4868, 1.3480, 1.5799], device='cuda:6'), covar=tensor([0.0964, 0.1801, 0.0990, 0.0877, 0.1585, 0.0274, 0.1487, 0.1766], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 13:35:36,714 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4736, 1.5144, 1.8184, 1.8839, 1.6258, 3.2619, 1.3613, 1.6268], device='cuda:6'), covar=tensor([0.0881, 0.1640, 0.1154, 0.0822, 0.1411, 0.0266, 0.1371, 0.1552], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 13:35:36,765 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0307, 1.8335, 1.6541, 1.8617, 1.7755, 1.7492, 1.7797, 2.4855], device='cuda:6'), covar=tensor([0.4026, 0.4784, 0.3400, 0.4441, 0.4415, 0.2474, 0.4445, 0.1722], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0260, 0.0223, 0.0278, 0.0243, 0.0210, 0.0245, 0.0215], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:35:41,814 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=61374.0, num_to_drop=0, layers_to_drop=set() 
2023-03-26 13:35:43,533 INFO [finetune.py:976] (6/7) Epoch 11, batch 4100, loss[loss=0.2708, simple_loss=0.3334, pruned_loss=0.1041, over 4815.00 frames. ], tot_loss[loss=0.1991, simple_loss=0.2643, pruned_loss=0.06695, over 953482.15 frames. ], batch size: 51, lr: 3.68e-03, grad_scale: 64.0 2023-03-26 13:35:47,828 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1001, 1.6959, 1.8198, 1.9127, 1.6187, 1.7163, 1.8432, 1.7381], device='cuda:6'), covar=tensor([0.5524, 0.6010, 0.5436, 0.6001, 0.7324, 0.5907, 0.7860, 0.5311], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0239, 0.0254, 0.0259, 0.0256, 0.0231, 0.0275, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:36:16,502 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=61417.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:36:20,073 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=61422.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:36:26,638 INFO [finetune.py:976] (6/7) Epoch 11, batch 4150, loss[loss=0.1883, simple_loss=0.2562, pruned_loss=0.0602, over 4747.00 frames. ], tot_loss[loss=0.1982, simple_loss=0.2641, pruned_loss=0.06614, over 954097.02 frames. ], batch size: 27, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:36:32,504 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.161e+02 1.629e+02 1.982e+02 2.519e+02 5.426e+02, threshold=3.964e+02, percent-clipped=4.0 2023-03-26 13:36:59,823 INFO [finetune.py:976] (6/7) Epoch 11, batch 4200, loss[loss=0.2018, simple_loss=0.2612, pruned_loss=0.07122, over 4740.00 frames. ], tot_loss[loss=0.1975, simple_loss=0.2638, pruned_loss=0.06557, over 954234.08 frames. ], batch size: 27, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:37:09,308 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6532, 1.7054, 1.7392, 0.9937, 1.8112, 1.9816, 1.9674, 1.4750], device='cuda:6'), covar=tensor([0.0844, 0.0524, 0.0488, 0.0536, 0.0396, 0.0552, 0.0332, 0.0666], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0153, 0.0121, 0.0132, 0.0130, 0.0125, 0.0143, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3933e-05, 1.1238e-04, 8.7431e-05, 9.5192e-05, 9.2635e-05, 9.1071e-05, 1.0458e-04, 1.0663e-04], device='cuda:6') 2023-03-26 13:37:13,616 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 13:37:15,738 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4673, 1.3588, 1.3822, 1.3372, 0.8481, 2.2126, 0.7824, 1.2214], device='cuda:6'), covar=tensor([0.3568, 0.2581, 0.2221, 0.2618, 0.2019, 0.0389, 0.2852, 0.1450], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0098, 0.0099, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 13:37:35,245 INFO [finetune.py:976] (6/7) Epoch 11, batch 4250, loss[loss=0.1437, simple_loss=0.2113, pruned_loss=0.03805, over 4716.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2609, pruned_loss=0.06425, over 955685.61 frames. 
], batch size: 23, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:37:45,940 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.771e+01 1.547e+02 1.858e+02 2.245e+02 5.805e+02, threshold=3.715e+02, percent-clipped=2.0 2023-03-26 13:38:06,508 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4343, 1.2988, 1.3219, 1.4050, 1.6815, 1.5453, 1.3721, 1.2484], device='cuda:6'), covar=tensor([0.0320, 0.0318, 0.0556, 0.0294, 0.0250, 0.0432, 0.0378, 0.0400], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0108, 0.0140, 0.0114, 0.0102, 0.0103, 0.0092, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0826e-05, 8.4064e-05, 1.1143e-04, 8.9238e-05, 7.9646e-05, 7.6235e-05, 6.9773e-05, 8.2291e-05], device='cuda:6') 2023-03-26 13:38:11,455 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9390, 1.3198, 1.9055, 1.8613, 1.6935, 1.6397, 1.7880, 1.7461], device='cuda:6'), covar=tensor([0.4108, 0.4723, 0.3999, 0.4479, 0.5490, 0.4375, 0.5258, 0.3821], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0238, 0.0253, 0.0258, 0.0255, 0.0231, 0.0274, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:38:15,489 INFO [finetune.py:976] (6/7) Epoch 11, batch 4300, loss[loss=0.1875, simple_loss=0.2475, pruned_loss=0.06371, over 4811.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2588, pruned_loss=0.06432, over 954551.45 frames. ], batch size: 41, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:38:33,385 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4805, 2.8831, 2.8048, 1.3589, 2.9962, 2.2328, 0.6601, 1.9217], device='cuda:6'), covar=tensor([0.2147, 0.2329, 0.1799, 0.3734, 0.1372, 0.1275, 0.4295, 0.1948], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0173, 0.0159, 0.0128, 0.0155, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 13:38:48,430 INFO [finetune.py:976] (6/7) Epoch 11, batch 4350, loss[loss=0.2267, simple_loss=0.2925, pruned_loss=0.0804, over 4810.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2546, pruned_loss=0.06238, over 954816.04 frames. ], batch size: 41, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:38:54,824 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.580e+02 1.801e+02 2.212e+02 3.446e+02, threshold=3.603e+02, percent-clipped=0.0 2023-03-26 13:39:12,813 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0584, 2.0732, 2.1217, 1.5510, 2.1945, 2.3276, 2.1641, 1.6610], device='cuda:6'), covar=tensor([0.0554, 0.0583, 0.0709, 0.0864, 0.0566, 0.0644, 0.0585, 0.1161], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0134, 0.0142, 0.0124, 0.0120, 0.0143, 0.0144, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:39:21,856 INFO [finetune.py:976] (6/7) Epoch 11, batch 4400, loss[loss=0.2564, simple_loss=0.3133, pruned_loss=0.0998, over 4742.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2567, pruned_loss=0.0638, over 953386.83 frames. 
], batch size: 59, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:39:36,487 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.5942, 3.1997, 2.8935, 1.7041, 3.0402, 2.6213, 2.4884, 2.9028], device='cuda:6'), covar=tensor([0.0972, 0.0760, 0.1571, 0.2073, 0.1486, 0.1882, 0.1873, 0.1027], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0197, 0.0201, 0.0185, 0.0214, 0.0206, 0.0221, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:39:44,548 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.90 vs. limit=5.0 2023-03-26 13:39:53,808 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=61717.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:40:03,848 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0271, 1.9980, 1.8191, 2.2225, 2.6846, 2.0724, 1.8734, 1.4695], device='cuda:6'), covar=tensor([0.2437, 0.2092, 0.2044, 0.1691, 0.1787, 0.1183, 0.2280, 0.2053], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0208, 0.0209, 0.0190, 0.0242, 0.0182, 0.0213, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:40:04,332 INFO [finetune.py:976] (6/7) Epoch 11, batch 4450, loss[loss=0.2006, simple_loss=0.2812, pruned_loss=0.05998, over 4817.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.26, pruned_loss=0.06452, over 952990.11 frames. ], batch size: 30, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:40:07,490 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61732.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:40:14,299 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.223e+02 1.628e+02 1.977e+02 2.534e+02 3.640e+02, threshold=3.954e+02, percent-clipped=2.0 2023-03-26 13:40:15,536 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9594, 1.8406, 1.5371, 1.8636, 1.7659, 1.7714, 1.8055, 2.4757], device='cuda:6'), covar=tensor([0.4068, 0.4921, 0.3514, 0.4441, 0.4484, 0.2543, 0.4555, 0.1859], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0224, 0.0278, 0.0244, 0.0210, 0.0246, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:40:23,501 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61743.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:40:45,763 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 13:40:49,787 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=61765.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:40:57,017 INFO [finetune.py:976] (6/7) Epoch 11, batch 4500, loss[loss=0.2586, simple_loss=0.3076, pruned_loss=0.1048, over 4835.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.2619, pruned_loss=0.06524, over 952787.71 frames. ], batch size: 49, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:41:07,874 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61793.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:41:16,092 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61804.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:41:33,078 INFO [finetune.py:976] (6/7) Epoch 11, batch 4550, loss[loss=0.2217, simple_loss=0.2891, pruned_loss=0.07714, over 4883.00 frames. 
], tot_loss[loss=0.1961, simple_loss=0.2623, pruned_loss=0.06501, over 951896.21 frames. ], batch size: 32, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:41:43,496 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.535e+01 1.607e+02 1.951e+02 2.245e+02 3.846e+02, threshold=3.902e+02, percent-clipped=0.0 2023-03-26 13:42:14,154 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0068, 1.7978, 1.4616, 1.5388, 1.7363, 1.6662, 1.7262, 2.4332], device='cuda:6'), covar=tensor([0.4121, 0.4498, 0.3684, 0.4289, 0.4506, 0.2500, 0.4218, 0.1763], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0223, 0.0277, 0.0243, 0.0210, 0.0246, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:42:15,224 INFO [finetune.py:976] (6/7) Epoch 11, batch 4600, loss[loss=0.2281, simple_loss=0.281, pruned_loss=0.08759, over 4738.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.2614, pruned_loss=0.06445, over 952593.26 frames. ], batch size: 59, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:42:40,450 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=61914.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:42:48,594 INFO [finetune.py:976] (6/7) Epoch 11, batch 4650, loss[loss=0.2181, simple_loss=0.275, pruned_loss=0.08057, over 4910.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2598, pruned_loss=0.06455, over 950963.68 frames. ], batch size: 46, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:42:56,051 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.606e+02 1.934e+02 2.317e+02 5.626e+02, threshold=3.867e+02, percent-clipped=3.0 2023-03-26 13:43:31,738 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=61975.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:43:32,825 INFO [finetune.py:976] (6/7) Epoch 11, batch 4700, loss[loss=0.1658, simple_loss=0.2275, pruned_loss=0.05205, over 4828.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.257, pruned_loss=0.06371, over 952567.43 frames. ], batch size: 41, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:43:57,714 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1980, 1.9120, 1.6130, 1.8704, 1.8643, 1.8971, 1.8894, 2.6480], device='cuda:6'), covar=tensor([0.4486, 0.5398, 0.3942, 0.5001, 0.4816, 0.2906, 0.4707, 0.1927], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0223, 0.0277, 0.0243, 0.0210, 0.0246, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:44:17,455 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8625, 1.7490, 1.6468, 1.8363, 1.4261, 4.2708, 1.6728, 2.1853], device='cuda:6'), covar=tensor([0.3265, 0.2514, 0.2123, 0.2234, 0.1642, 0.0113, 0.2544, 0.1222], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 13:44:19,778 INFO [finetune.py:976] (6/7) Epoch 11, batch 4750, loss[loss=0.1851, simple_loss=0.2591, pruned_loss=0.0556, over 4908.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2551, pruned_loss=0.06311, over 952221.88 frames. 
], batch size: 37, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:44:25,600 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.474e+02 1.769e+02 2.148e+02 4.944e+02, threshold=3.539e+02, percent-clipped=1.0 2023-03-26 13:44:53,404 INFO [finetune.py:976] (6/7) Epoch 11, batch 4800, loss[loss=0.2031, simple_loss=0.2636, pruned_loss=0.07131, over 4816.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2592, pruned_loss=0.06476, over 952982.88 frames. ], batch size: 25, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:45:06,473 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62088.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:45:08,309 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9263, 1.5106, 0.8073, 1.9597, 2.2700, 1.6456, 1.6294, 1.7594], device='cuda:6'), covar=tensor([0.1928, 0.2811, 0.2795, 0.1537, 0.2423, 0.2622, 0.2069, 0.2869], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0093, 0.0120, 0.0094, 0.0099, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 13:45:09,599 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8586, 1.0115, 1.7966, 1.7757, 1.6025, 1.5622, 1.6093, 1.6437], device='cuda:6'), covar=tensor([0.3764, 0.4414, 0.3627, 0.3593, 0.4839, 0.3629, 0.4622, 0.3438], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0238, 0.0254, 0.0259, 0.0255, 0.0230, 0.0274, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:45:12,016 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.8856, 0.8468, 0.8468, 0.9876, 1.0629, 0.9826, 0.9538, 0.8674], device='cuda:6'), covar=tensor([0.0413, 0.0257, 0.0560, 0.0244, 0.0269, 0.0447, 0.0296, 0.0371], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0109, 0.0141, 0.0114, 0.0102, 0.0104, 0.0093, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.1772e-05, 8.4704e-05, 1.1218e-04, 8.9496e-05, 7.9769e-05, 7.7161e-05, 6.9946e-05, 8.3083e-05], device='cuda:6') 2023-03-26 13:45:13,198 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62099.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:45:28,995 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8250, 1.5851, 0.8963, 1.8637, 2.2830, 1.5025, 1.8550, 1.7675], device='cuda:6'), covar=tensor([0.1383, 0.2022, 0.2008, 0.1122, 0.1701, 0.1673, 0.1368, 0.2063], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0093, 0.0120, 0.0094, 0.0099, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 13:45:47,604 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62122.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:45:50,589 INFO [finetune.py:976] (6/7) Epoch 11, batch 4850, loss[loss=0.2311, simple_loss=0.2975, pruned_loss=0.08232, over 4804.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.263, pruned_loss=0.06583, over 950231.98 frames. ], batch size: 51, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:46:01,539 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.730e+02 2.037e+02 2.587e+02 8.043e+02, threshold=4.075e+02, percent-clipped=4.0 2023-03-26 13:46:45,240 INFO [finetune.py:976] (6/7) Epoch 11, batch 4900, loss[loss=0.1673, simple_loss=0.2417, pruned_loss=0.04646, over 4869.00 frames. 
], tot_loss[loss=0.1979, simple_loss=0.2639, pruned_loss=0.06598, over 951743.05 frames. ], batch size: 34, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:46:54,044 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62183.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:47:49,180 INFO [finetune.py:976] (6/7) Epoch 11, batch 4950, loss[loss=0.1593, simple_loss=0.2274, pruned_loss=0.04563, over 4818.00 frames. ], tot_loss[loss=0.199, simple_loss=0.2649, pruned_loss=0.06655, over 953522.56 frames. ], batch size: 33, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:47:56,654 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.284e+02 1.728e+02 2.029e+02 2.471e+02 5.736e+02, threshold=4.057e+02, percent-clipped=2.0 2023-03-26 13:48:17,779 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1820, 1.9733, 1.7527, 1.9480, 1.9562, 1.8781, 1.9651, 2.6243], device='cuda:6'), covar=tensor([0.4355, 0.4647, 0.3559, 0.4428, 0.4156, 0.2734, 0.4346, 0.1924], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0261, 0.0224, 0.0278, 0.0243, 0.0210, 0.0247, 0.0217], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:48:18,910 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62270.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:48:21,274 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:48:24,032 INFO [finetune.py:976] (6/7) Epoch 11, batch 5000, loss[loss=0.2037, simple_loss=0.2676, pruned_loss=0.06989, over 4925.00 frames. ], tot_loss[loss=0.1964, simple_loss=0.2622, pruned_loss=0.06529, over 954267.93 frames. ], batch size: 38, lr: 3.68e-03, grad_scale: 32.0 2023-03-26 13:48:57,127 INFO [finetune.py:976] (6/7) Epoch 11, batch 5050, loss[loss=0.154, simple_loss=0.2216, pruned_loss=0.04317, over 4758.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2598, pruned_loss=0.06468, over 955186.69 frames. ], batch size: 28, lr: 3.68e-03, grad_scale: 16.0 2023-03-26 13:49:02,471 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62334.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:49:04,172 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.504e+02 1.759e+02 2.068e+02 4.473e+02, threshold=3.518e+02, percent-clipped=1.0 2023-03-26 13:49:32,191 INFO [finetune.py:976] (6/7) Epoch 11, batch 5100, loss[loss=0.1727, simple_loss=0.2341, pruned_loss=0.05564, over 4935.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2555, pruned_loss=0.06231, over 956193.57 frames. ], batch size: 33, lr: 3.68e-03, grad_scale: 16.0 2023-03-26 13:49:39,743 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 13:49:40,534 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62388.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:49:42,305 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62391.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:49:47,641 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:49:48,068 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. 
limit=2.0 2023-03-26 13:50:05,685 INFO [finetune.py:976] (6/7) Epoch 11, batch 5150, loss[loss=0.2418, simple_loss=0.3187, pruned_loss=0.08242, over 4897.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2564, pruned_loss=0.06287, over 957729.85 frames. ], batch size: 43, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:50:12,137 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62436.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:50:12,676 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.578e+02 2.001e+02 2.432e+02 3.455e+02, threshold=4.003e+02, percent-clipped=0.0 2023-03-26 13:50:26,781 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62447.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:50:33,935 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62452.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:50:55,270 INFO [finetune.py:976] (6/7) Epoch 11, batch 5200, loss[loss=0.1413, simple_loss=0.2091, pruned_loss=0.03681, over 4491.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2599, pruned_loss=0.06443, over 956333.14 frames. ], batch size: 19, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:51:00,100 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62478.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:51:36,856 INFO [finetune.py:976] (6/7) Epoch 11, batch 5250, loss[loss=0.1703, simple_loss=0.2268, pruned_loss=0.05697, over 4759.00 frames. ], tot_loss[loss=0.1964, simple_loss=0.2624, pruned_loss=0.06519, over 953526.71 frames. ], batch size: 26, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:51:42,817 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7301, 1.6535, 1.3767, 1.7762, 2.2127, 1.8682, 1.3928, 1.4281], device='cuda:6'), covar=tensor([0.2037, 0.1860, 0.1962, 0.1550, 0.1584, 0.1133, 0.2420, 0.1839], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0208, 0.0209, 0.0188, 0.0242, 0.0182, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:51:54,384 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.618e+02 1.949e+02 2.406e+02 7.235e+02, threshold=3.897e+02, percent-clipped=3.0 2023-03-26 13:52:03,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62545.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:52:06,438 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9673, 1.8525, 1.5361, 1.8191, 1.7418, 1.6986, 1.7382, 2.5011], device='cuda:6'), covar=tensor([0.4299, 0.4704, 0.3697, 0.4140, 0.4322, 0.2534, 0.4354, 0.1706], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0224, 0.0277, 0.0243, 0.0210, 0.0247, 0.0217], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:52:08,880 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4208, 2.2959, 1.9748, 2.2166, 2.3848, 2.1196, 2.6957, 2.3646], device='cuda:6'), covar=tensor([0.1335, 0.2237, 0.3078, 0.2860, 0.2506, 0.1779, 0.3080, 0.1938], device='cuda:6'), in_proj_covar=tensor([0.0176, 0.0187, 0.0234, 0.0255, 0.0241, 0.0197, 0.0212, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:52:19,514 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, 
batch_count=62570.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:52:23,693 INFO [finetune.py:976] (6/7) Epoch 11, batch 5300, loss[loss=0.211, simple_loss=0.2808, pruned_loss=0.07058, over 4920.00 frames. ], tot_loss[loss=0.1977, simple_loss=0.2631, pruned_loss=0.06609, over 953392.67 frames. ], batch size: 33, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:52:29,614 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62585.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:52:31,263 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2944, 3.7578, 3.9307, 4.1488, 4.0191, 3.8366, 4.4193, 1.3901], device='cuda:6'), covar=tensor([0.0775, 0.0823, 0.0799, 0.0915, 0.1302, 0.1455, 0.0674, 0.5219], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0279, 0.0292, 0.0334, 0.0287, 0.0305, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:52:44,283 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62606.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:52:52,191 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62618.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:52:57,606 INFO [finetune.py:976] (6/7) Epoch 11, batch 5350, loss[loss=0.1995, simple_loss=0.2604, pruned_loss=0.06929, over 4701.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2622, pruned_loss=0.06494, over 951224.52 frames. ], batch size: 23, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:52:58,902 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62629.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:53:04,201 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.504e+02 1.845e+02 2.238e+02 3.589e+02, threshold=3.690e+02, percent-clipped=0.0 2023-03-26 13:53:10,772 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62646.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:53:24,277 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62666.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:53:30,762 INFO [finetune.py:976] (6/7) Epoch 11, batch 5400, loss[loss=0.1485, simple_loss=0.2138, pruned_loss=0.04165, over 4803.00 frames. ], tot_loss[loss=0.1941, simple_loss=0.2597, pruned_loss=0.06426, over 951661.77 frames. ], batch size: 51, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:54:04,662 INFO [finetune.py:976] (6/7) Epoch 11, batch 5450, loss[loss=0.1743, simple_loss=0.244, pruned_loss=0.05232, over 4828.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.257, pruned_loss=0.06324, over 951593.43 frames. 
], batch size: 33, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:54:04,798 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62727.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:54:10,763 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.513e+01 1.463e+02 1.876e+02 2.335e+02 4.427e+02, threshold=3.751e+02, percent-clipped=2.0 2023-03-26 13:54:17,808 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62747.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:54:36,197 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=62773.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:54:38,554 INFO [finetune.py:976] (6/7) Epoch 11, batch 5500, loss[loss=0.1966, simple_loss=0.26, pruned_loss=0.06659, over 4907.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2539, pruned_loss=0.06202, over 952806.12 frames. ], batch size: 36, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:54:39,234 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62778.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:55:11,792 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7959, 1.2847, 1.8458, 1.7638, 1.5224, 1.5407, 1.6918, 1.6793], device='cuda:6'), covar=tensor([0.4129, 0.4490, 0.3661, 0.4034, 0.5270, 0.3914, 0.5098, 0.3617], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0238, 0.0253, 0.0259, 0.0255, 0.0230, 0.0274, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:55:12,351 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62826.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:55:12,912 INFO [finetune.py:976] (6/7) Epoch 11, batch 5550, loss[loss=0.1984, simple_loss=0.2636, pruned_loss=0.06658, over 4798.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2565, pruned_loss=0.06301, over 953131.93 frames. ], batch size: 29, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:55:17,993 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=62834.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:55:19,880 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.162e+02 1.580e+02 1.841e+02 2.336e+02 5.980e+02, threshold=3.683e+02, percent-clipped=6.0 2023-03-26 13:55:30,344 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8433, 3.5836, 3.5329, 1.7775, 3.7428, 2.8556, 0.9906, 2.5138], device='cuda:6'), covar=tensor([0.2421, 0.1759, 0.1300, 0.3013, 0.0991, 0.0869, 0.3739, 0.1403], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0173, 0.0158, 0.0128, 0.0155, 0.0121, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 13:56:07,689 INFO [finetune.py:976] (6/7) Epoch 11, batch 5600, loss[loss=0.1769, simple_loss=0.2604, pruned_loss=0.0467, over 4924.00 frames. ], tot_loss[loss=0.1925, simple_loss=0.259, pruned_loss=0.06301, over 954241.83 frames. 
], batch size: 38, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:56:13,064 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0488, 1.9047, 1.6477, 1.8636, 1.8900, 1.8474, 1.8363, 2.6028], device='cuda:6'), covar=tensor([0.4853, 0.5610, 0.4018, 0.5249, 0.4842, 0.2789, 0.4989, 0.1999], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0260, 0.0222, 0.0275, 0.0242, 0.0209, 0.0244, 0.0215], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:56:22,195 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62901.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:56:37,252 INFO [finetune.py:976] (6/7) Epoch 11, batch 5650, loss[loss=0.1812, simple_loss=0.2556, pruned_loss=0.0534, over 4868.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2613, pruned_loss=0.06357, over 951598.64 frames. ], batch size: 44, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:56:38,487 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=62929.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:56:48,738 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.326e+01 1.606e+02 1.910e+02 2.279e+02 4.497e+02, threshold=3.820e+02, percent-clipped=2.0 2023-03-26 13:56:51,137 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=62941.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:57:18,748 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0050, 2.6647, 3.2471, 2.2185, 2.8871, 3.2639, 2.4399, 3.3187], device='cuda:6'), covar=tensor([0.1271, 0.1909, 0.1168, 0.2006, 0.0855, 0.1455, 0.2554, 0.0739], device='cuda:6'), in_proj_covar=tensor([0.0199, 0.0208, 0.0195, 0.0193, 0.0181, 0.0217, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:57:23,400 INFO [finetune.py:976] (6/7) Epoch 11, batch 5700, loss[loss=0.1678, simple_loss=0.2452, pruned_loss=0.04522, over 3536.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.257, pruned_loss=0.06282, over 932934.77 frames. ], batch size: 15, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:57:23,435 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=62977.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:57:54,980 INFO [finetune.py:976] (6/7) Epoch 12, batch 0, loss[loss=0.1727, simple_loss=0.2503, pruned_loss=0.04756, over 4890.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2503, pruned_loss=0.04756, over 4890.00 frames. ], batch size: 37, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:57:54,980 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 13:58:04,696 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1159, 1.9866, 1.9393, 1.8585, 1.9514, 1.8798, 1.9844, 2.5655], device='cuda:6'), covar=tensor([0.3972, 0.4952, 0.3580, 0.4008, 0.4211, 0.2523, 0.3937, 0.1939], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0260, 0.0222, 0.0275, 0.0242, 0.0209, 0.0245, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:58:11,587 INFO [finetune.py:1010] (6/7) Epoch 12, validation: loss=0.16, simple_loss=0.2305, pruned_loss=0.04472, over 2265189.00 frames. 
2023-03-26 13:58:11,587 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 13:58:19,013 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9507, 1.9449, 2.0338, 1.3201, 1.9366, 2.0602, 2.0585, 1.6469], device='cuda:6'), covar=tensor([0.0646, 0.0622, 0.0737, 0.0958, 0.0753, 0.0677, 0.0642, 0.1145], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0134, 0.0141, 0.0125, 0.0121, 0.0143, 0.0143, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:58:22,061 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63022.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:58:37,035 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.590e+02 1.966e+02 2.351e+02 4.424e+02, threshold=3.931e+02, percent-clipped=2.0 2023-03-26 13:58:49,962 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63047.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:59:00,860 INFO [finetune.py:976] (6/7) Epoch 12, batch 50, loss[loss=0.2253, simple_loss=0.2934, pruned_loss=0.07865, over 4863.00 frames. ], tot_loss[loss=0.196, simple_loss=0.2631, pruned_loss=0.06447, over 217157.43 frames. ], batch size: 34, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 13:59:11,682 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-26 13:59:31,024 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3499, 2.6875, 2.3572, 1.5911, 2.5836, 2.6795, 2.6244, 2.3113], device='cuda:6'), covar=tensor([0.0646, 0.0545, 0.0779, 0.1002, 0.0792, 0.0678, 0.0641, 0.0997], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0135, 0.0142, 0.0126, 0.0122, 0.0144, 0.0144, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 13:59:31,708 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 13:59:42,653 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63095.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 13:59:54,680 INFO [finetune.py:976] (6/7) Epoch 12, batch 100, loss[loss=0.1804, simple_loss=0.2563, pruned_loss=0.05226, over 4819.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2572, pruned_loss=0.06294, over 381666.32 frames. ], batch size: 30, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:00:15,390 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63129.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:00:21,135 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.723e+02 1.978e+02 2.544e+02 5.107e+02, threshold=3.957e+02, percent-clipped=1.0 2023-03-26 14:00:30,955 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7196, 1.6008, 2.2872, 2.0606, 1.7631, 4.0365, 1.5061, 1.7452], device='cuda:6'), covar=tensor([0.0885, 0.1784, 0.1120, 0.0881, 0.1642, 0.0200, 0.1520, 0.1682], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:00:50,144 INFO [finetune.py:976] (6/7) Epoch 12, batch 150, loss[loss=0.1521, simple_loss=0.2166, pruned_loss=0.04381, over 4706.00 frames. 
], tot_loss[loss=0.1878, simple_loss=0.2531, pruned_loss=0.06124, over 509162.33 frames. ], batch size: 23, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:01:05,171 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 14:01:23,919 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5093, 3.9156, 4.0908, 4.3085, 4.2344, 4.0380, 4.6230, 1.4226], device='cuda:6'), covar=tensor([0.0755, 0.0845, 0.0870, 0.1023, 0.1266, 0.1573, 0.0647, 0.5662], device='cuda:6'), in_proj_covar=tensor([0.0345, 0.0242, 0.0274, 0.0288, 0.0328, 0.0281, 0.0299, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:01:47,597 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63201.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:01:55,883 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.66 vs. limit=2.0 2023-03-26 14:01:56,079 INFO [finetune.py:976] (6/7) Epoch 12, batch 200, loss[loss=0.2064, simple_loss=0.271, pruned_loss=0.07091, over 4188.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2532, pruned_loss=0.06236, over 608076.63 frames. ], batch size: 65, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:02:17,488 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63221.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:02:32,829 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.551e+02 1.870e+02 2.223e+02 3.918e+02, threshold=3.740e+02, percent-clipped=0.0 2023-03-26 14:02:41,477 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63241.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:02:46,790 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63249.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:02:51,341 INFO [finetune.py:976] (6/7) Epoch 12, batch 250, loss[loss=0.2124, simple_loss=0.2815, pruned_loss=0.07161, over 4903.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2535, pruned_loss=0.06218, over 684174.59 frames. ], batch size: 36, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:03:08,678 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63282.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:03:13,377 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63289.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:03:14,433 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. 
limit=2.0 2023-03-26 14:03:17,596 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1553, 1.3380, 1.1944, 1.3239, 1.4948, 2.4780, 1.2486, 1.5008], device='cuda:6'), covar=tensor([0.1070, 0.1980, 0.1167, 0.1029, 0.1752, 0.0395, 0.1627, 0.1788], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0078, 0.0092, 0.0082, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:03:23,527 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8486, 1.3065, 1.8985, 1.8193, 1.6285, 1.5674, 1.7617, 1.7238], device='cuda:6'), covar=tensor([0.4357, 0.4779, 0.3944, 0.4035, 0.5334, 0.4106, 0.5206, 0.3755], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0239, 0.0256, 0.0261, 0.0258, 0.0232, 0.0276, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:03:23,974 INFO [finetune.py:976] (6/7) Epoch 12, batch 300, loss[loss=0.1739, simple_loss=0.2477, pruned_loss=0.05005, over 4833.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2583, pruned_loss=0.06367, over 745814.71 frames. ], batch size: 33, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:03:40,236 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63322.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:03:49,464 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4633, 1.3500, 1.5365, 1.6511, 1.5085, 3.0120, 1.2706, 1.4946], device='cuda:6'), covar=tensor([0.0951, 0.1756, 0.1064, 0.0945, 0.1536, 0.0275, 0.1507, 0.1676], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0078, 0.0092, 0.0082, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:03:51,185 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.663e+02 2.076e+02 2.406e+02 5.777e+02, threshold=4.151e+02, percent-clipped=4.0 2023-03-26 14:04:08,575 INFO [finetune.py:976] (6/7) Epoch 12, batch 350, loss[loss=0.1744, simple_loss=0.247, pruned_loss=0.05091, over 4825.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2597, pruned_loss=0.06374, over 794052.19 frames. ], batch size: 47, lr: 3.67e-03, grad_scale: 16.0 2023-03-26 14:04:27,603 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63370.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:04:55,918 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4568, 1.3411, 1.3754, 1.3221, 0.8803, 2.2594, 0.7879, 1.1904], device='cuda:6'), covar=tensor([0.3466, 0.2548, 0.2164, 0.2415, 0.1924, 0.0361, 0.2816, 0.1427], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:04:59,619 INFO [finetune.py:976] (6/7) Epoch 12, batch 400, loss[loss=0.1483, simple_loss=0.2073, pruned_loss=0.04468, over 4822.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.26, pruned_loss=0.06348, over 829946.72 frames. 
], batch size: 25, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:05:10,701 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63420.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:12,412 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63422.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:16,648 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63429.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:21,322 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.591e+02 1.854e+02 2.332e+02 4.296e+02, threshold=3.709e+02, percent-clipped=1.0 2023-03-26 14:05:38,142 INFO [finetune.py:976] (6/7) Epoch 12, batch 450, loss[loss=0.2108, simple_loss=0.2764, pruned_loss=0.07258, over 4714.00 frames. ], tot_loss[loss=0.193, simple_loss=0.2596, pruned_loss=0.06317, over 857645.69 frames. ], batch size: 23, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:05:57,273 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63477.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:05:59,774 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63481.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:06:00,963 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63483.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:06:15,173 INFO [finetune.py:976] (6/7) Epoch 12, batch 500, loss[loss=0.2042, simple_loss=0.269, pruned_loss=0.0697, over 4868.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2574, pruned_loss=0.06271, over 878725.59 frames. ], batch size: 31, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:06:37,049 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.336e+01 1.553e+02 1.855e+02 2.331e+02 4.193e+02, threshold=3.711e+02, percent-clipped=1.0 2023-03-26 14:06:48,875 INFO [finetune.py:976] (6/7) Epoch 12, batch 550, loss[loss=0.2351, simple_loss=0.2773, pruned_loss=0.09646, over 4820.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2546, pruned_loss=0.06175, over 895200.78 frames. ], batch size: 41, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:06:58,446 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:04,314 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63577.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:10,286 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63586.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:16,926 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7234, 1.7905, 1.5590, 1.9582, 2.3052, 1.9538, 1.5785, 1.4027], device='cuda:6'), covar=tensor([0.2451, 0.2095, 0.2042, 0.1683, 0.1888, 0.1288, 0.2592, 0.2210], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0206, 0.0208, 0.0188, 0.0241, 0.0181, 0.0211, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:07:22,327 INFO [finetune.py:976] (6/7) Epoch 12, batch 600, loss[loss=0.2651, simple_loss=0.3144, pruned_loss=0.1079, over 4920.00 frames. ], tot_loss[loss=0.191, simple_loss=0.256, pruned_loss=0.06298, over 909686.51 frames. 
], batch size: 38, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:07:40,704 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63630.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:44,849 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.685e+02 2.017e+02 2.531e+02 3.696e+02, threshold=4.034e+02, percent-clipped=0.0 2023-03-26 14:07:51,118 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63647.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:07:56,392 INFO [finetune.py:976] (6/7) Epoch 12, batch 650, loss[loss=0.2357, simple_loss=0.2959, pruned_loss=0.08774, over 4863.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2607, pruned_loss=0.06447, over 921351.44 frames. ], batch size: 34, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:08:28,762 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7479, 1.6905, 1.8859, 1.1384, 1.8526, 1.8957, 1.8825, 1.4263], device='cuda:6'), covar=tensor([0.0578, 0.0758, 0.0627, 0.0926, 0.0689, 0.0640, 0.0592, 0.1256], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0133, 0.0139, 0.0123, 0.0119, 0.0141, 0.0142, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:08:29,866 INFO [finetune.py:976] (6/7) Epoch 12, batch 700, loss[loss=0.2084, simple_loss=0.2704, pruned_loss=0.07314, over 4919.00 frames. ], tot_loss[loss=0.1958, simple_loss=0.2618, pruned_loss=0.06492, over 928828.99 frames. ], batch size: 36, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:08:59,835 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.754e+02 2.049e+02 2.499e+02 4.974e+02, threshold=4.098e+02, percent-clipped=3.0 2023-03-26 14:09:04,058 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7521, 2.3669, 3.0221, 1.8795, 2.7498, 2.8747, 2.3090, 3.0800], device='cuda:6'), covar=tensor([0.1544, 0.2008, 0.1584, 0.2449, 0.0929, 0.1809, 0.2277, 0.0884], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0207, 0.0195, 0.0191, 0.0180, 0.0216, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:09:11,206 INFO [finetune.py:976] (6/7) Epoch 12, batch 750, loss[loss=0.1604, simple_loss=0.239, pruned_loss=0.04089, over 4841.00 frames. ], tot_loss[loss=0.1961, simple_loss=0.2626, pruned_loss=0.06479, over 937005.04 frames. ], batch size: 44, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:09:25,546 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63776.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:09:26,768 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63778.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:09:56,453 INFO [finetune.py:976] (6/7) Epoch 12, batch 800, loss[loss=0.2106, simple_loss=0.2722, pruned_loss=0.07455, over 4828.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2632, pruned_loss=0.06499, over 940462.85 frames. 
], batch size: 47, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:10:04,898 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63810.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:10:26,005 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.587e+02 1.868e+02 2.134e+02 3.136e+02, threshold=3.736e+02, percent-clipped=1.0 2023-03-26 14:10:26,706 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2508, 2.8983, 3.0046, 3.1521, 3.0315, 2.8675, 3.2912, 0.9730], device='cuda:6'), covar=tensor([0.1047, 0.0896, 0.0967, 0.1097, 0.1540, 0.1525, 0.1081, 0.4835], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0243, 0.0275, 0.0288, 0.0329, 0.0281, 0.0299, 0.0292], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:10:32,469 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63845.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:10:38,485 INFO [finetune.py:976] (6/7) Epoch 12, batch 850, loss[loss=0.1426, simple_loss=0.2245, pruned_loss=0.03038, over 4823.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2602, pruned_loss=0.06352, over 943349.41 frames. ], batch size: 38, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:10:51,303 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63871.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:10:59,627 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=63877.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:22,732 INFO [finetune.py:976] (6/7) Epoch 12, batch 900, loss[loss=0.1837, simple_loss=0.2437, pruned_loss=0.06191, over 4835.00 frames. ], tot_loss[loss=0.1925, simple_loss=0.2583, pruned_loss=0.06341, over 946132.91 frames. ], batch size: 47, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:11:23,446 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63906.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:29,454 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=63916.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:35,894 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=63925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:35,903 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:44,049 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.021e+02 1.611e+02 1.873e+02 2.372e+02 4.297e+02, threshold=3.747e+02, percent-clipped=2.0 2023-03-26 14:11:47,184 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=63942.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:11:56,188 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.86 vs. limit=5.0 2023-03-26 14:11:56,453 INFO [finetune.py:976] (6/7) Epoch 12, batch 950, loss[loss=0.18, simple_loss=0.2533, pruned_loss=0.05339, over 4890.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2572, pruned_loss=0.06294, over 948566.31 frames. 
], batch size: 32, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:12:10,887 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=63977.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:12:31,104 INFO [finetune.py:976] (6/7) Epoch 12, batch 1000, loss[loss=0.2292, simple_loss=0.2932, pruned_loss=0.08257, over 4856.00 frames. ], tot_loss[loss=0.194, simple_loss=0.2595, pruned_loss=0.06421, over 950738.89 frames. ], batch size: 31, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:12:43,081 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64023.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:12:51,953 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.200e+02 1.649e+02 1.875e+02 2.259e+02 3.443e+02, threshold=3.751e+02, percent-clipped=0.0 2023-03-26 14:12:57,318 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4505, 0.9959, 0.7871, 1.3119, 1.8720, 0.6597, 1.1623, 1.3116], device='cuda:6'), covar=tensor([0.1581, 0.2354, 0.2020, 0.1372, 0.2067, 0.2218, 0.1669, 0.2157], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0093, 0.0120, 0.0095, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 14:13:04,254 INFO [finetune.py:976] (6/7) Epoch 12, batch 1050, loss[loss=0.1964, simple_loss=0.2689, pruned_loss=0.06195, over 4826.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2611, pruned_loss=0.06395, over 950824.00 frames. ], batch size: 33, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:13:17,998 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64076.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:19,235 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:22,965 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64084.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:29,465 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5850, 1.5185, 1.9160, 1.3430, 1.6648, 1.8809, 1.4156, 2.0521], device='cuda:6'), covar=tensor([0.1313, 0.2168, 0.1263, 0.1574, 0.0870, 0.1307, 0.2924, 0.0746], device='cuda:6'), in_proj_covar=tensor([0.0199, 0.0207, 0.0196, 0.0192, 0.0180, 0.0216, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:13:37,901 INFO [finetune.py:976] (6/7) Epoch 12, batch 1100, loss[loss=0.1513, simple_loss=0.2265, pruned_loss=0.03806, over 4757.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2609, pruned_loss=0.06402, over 951288.52 frames. ], batch size: 27, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:13:53,913 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64124.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:13:55,112 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64126.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:14:05,900 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.584e+02 1.925e+02 2.329e+02 4.054e+02, threshold=3.850e+02, percent-clipped=2.0 2023-03-26 14:14:17,903 INFO [finetune.py:976] (6/7) Epoch 12, batch 1150, loss[loss=0.2274, simple_loss=0.2867, pruned_loss=0.08405, over 4870.00 frames. ], tot_loss[loss=0.1951, simple_loss=0.262, pruned_loss=0.0641, over 953004.59 frames. 
], batch size: 43, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:14:25,687 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64166.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:14:48,864 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64201.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:14:48,972 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-26 14:14:56,319 INFO [finetune.py:976] (6/7) Epoch 12, batch 1200, loss[loss=0.1819, simple_loss=0.2503, pruned_loss=0.05671, over 4812.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2603, pruned_loss=0.06373, over 952365.98 frames. ], batch size: 40, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:15:14,919 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:24,867 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64232.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:31,343 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.575e+02 1.833e+02 2.193e+02 5.344e+02, threshold=3.667e+02, percent-clipped=2.0 2023-03-26 14:15:34,440 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64242.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:43,192 INFO [finetune.py:976] (6/7) Epoch 12, batch 1250, loss[loss=0.1755, simple_loss=0.241, pruned_loss=0.05503, over 4197.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2582, pruned_loss=0.06333, over 951433.26 frames. ], batch size: 65, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:15:45,151 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8351, 1.6963, 1.5271, 1.8538, 2.1872, 1.7718, 1.6428, 1.5690], device='cuda:6'), covar=tensor([0.1569, 0.1659, 0.1441, 0.1371, 0.1249, 0.1072, 0.2014, 0.1479], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0207, 0.0209, 0.0189, 0.0241, 0.0182, 0.0213, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:15:55,114 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:15:55,714 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:05,690 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0 2023-03-26 14:16:09,195 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64290.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:11,101 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:27,223 INFO [finetune.py:976] (6/7) Epoch 12, batch 1300, loss[loss=0.1479, simple_loss=0.2167, pruned_loss=0.03954, over 4764.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2548, pruned_loss=0.0622, over 952853.84 frames. ], batch size: 26, lr: 3.66e-03, grad_scale: 16.0 2023-03-26 14:16:48,477 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.610e+02 1.842e+02 2.244e+02 4.381e+02, threshold=3.684e+02, percent-clipped=1.0 2023-03-26 14:16:50,743 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. 
limit=2.0 2023-03-26 14:16:53,412 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64345.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:16:59,917 INFO [finetune.py:976] (6/7) Epoch 12, batch 1350, loss[loss=0.2901, simple_loss=0.335, pruned_loss=0.1226, over 4141.00 frames. ], tot_loss[loss=0.1906, simple_loss=0.2554, pruned_loss=0.06288, over 952951.88 frames. ], batch size: 65, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:17:05,981 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2685, 3.6632, 3.9062, 4.0768, 4.0175, 3.7320, 4.3465, 1.3868], device='cuda:6'), covar=tensor([0.0769, 0.0806, 0.0810, 0.0844, 0.1189, 0.1673, 0.0710, 0.5619], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0242, 0.0275, 0.0289, 0.0330, 0.0281, 0.0300, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:17:13,089 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 14:17:16,047 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64379.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:17:25,043 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3644, 2.1693, 1.7033, 2.3262, 2.2260, 1.9949, 2.5646, 2.3128], device='cuda:6'), covar=tensor([0.1313, 0.2388, 0.3487, 0.2691, 0.2741, 0.1767, 0.3284, 0.2047], device='cuda:6'), in_proj_covar=tensor([0.0177, 0.0188, 0.0234, 0.0255, 0.0241, 0.0198, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:17:33,432 INFO [finetune.py:976] (6/7) Epoch 12, batch 1400, loss[loss=0.2247, simple_loss=0.2944, pruned_loss=0.07749, over 4816.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2591, pruned_loss=0.06391, over 954451.09 frames. ], batch size: 30, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:17:34,209 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64406.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:17:46,547 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3148, 2.0624, 1.7150, 0.8283, 1.9438, 1.7187, 1.3048, 1.8772], device='cuda:6'), covar=tensor([0.0704, 0.1035, 0.1461, 0.2351, 0.1361, 0.2358, 0.2906, 0.1092], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0200, 0.0203, 0.0187, 0.0215, 0.0210, 0.0224, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:17:54,256 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.617e+02 1.936e+02 2.295e+02 3.610e+02, threshold=3.872e+02, percent-clipped=0.0 2023-03-26 14:18:06,657 INFO [finetune.py:976] (6/7) Epoch 12, batch 1450, loss[loss=0.2311, simple_loss=0.2878, pruned_loss=0.08721, over 4775.00 frames. ], tot_loss[loss=0.1948, simple_loss=0.2607, pruned_loss=0.06444, over 951382.10 frames. 
], batch size: 51, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:18:08,589 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5563, 1.9656, 1.5539, 1.5630, 2.1070, 2.0492, 1.9401, 1.8304], device='cuda:6'), covar=tensor([0.0515, 0.0333, 0.0618, 0.0400, 0.0354, 0.0604, 0.0321, 0.0425], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0109, 0.0140, 0.0114, 0.0102, 0.0104, 0.0094, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.2042e-05, 8.4593e-05, 1.1103e-04, 8.8708e-05, 7.9372e-05, 7.7137e-05, 7.0808e-05, 8.3692e-05], device='cuda:6') 2023-03-26 14:18:13,330 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64465.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:13,920 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64466.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:37,420 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64501.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:39,732 INFO [finetune.py:976] (6/7) Epoch 12, batch 1500, loss[loss=0.1774, simple_loss=0.2555, pruned_loss=0.04967, over 4915.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.262, pruned_loss=0.06451, over 953154.86 frames. ], batch size: 38, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:18:46,093 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64514.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:18:54,911 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64526.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:01,494 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.738e+02 2.083e+02 2.672e+02 4.064e+02, threshold=4.165e+02, percent-clipped=1.0 2023-03-26 14:19:15,862 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64549.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:19,445 INFO [finetune.py:976] (6/7) Epoch 12, batch 1550, loss[loss=0.1678, simple_loss=0.2393, pruned_loss=0.04813, over 4854.00 frames. ], tot_loss[loss=0.1959, simple_loss=0.263, pruned_loss=0.06447, over 954955.47 frames. ], batch size: 31, lr: 3.66e-03, grad_scale: 32.0 2023-03-26 14:19:33,896 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64572.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:45,159 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64588.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:45,837 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64589.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:19:56,452 INFO [finetune.py:976] (6/7) Epoch 12, batch 1600, loss[loss=0.2281, simple_loss=0.2906, pruned_loss=0.0828, over 4909.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2605, pruned_loss=0.06403, over 953738.62 frames. 
], batch size: 36, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:20:05,803 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5658, 1.4837, 1.3988, 1.5463, 1.1383, 3.2880, 1.2973, 1.6801], device='cuda:6'), covar=tensor([0.4073, 0.3075, 0.2515, 0.2810, 0.1883, 0.0275, 0.2591, 0.1319], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:20:12,895 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64620.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:20:30,240 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.633e+02 1.922e+02 2.431e+02 4.177e+02, threshold=3.845e+02, percent-clipped=1.0 2023-03-26 14:20:41,560 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64648.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:20:43,314 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:20:49,614 INFO [finetune.py:976] (6/7) Epoch 12, batch 1650, loss[loss=0.2137, simple_loss=0.259, pruned_loss=0.0842, over 4298.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2576, pruned_loss=0.06268, over 955472.65 frames. ], batch size: 18, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:21:01,599 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1979, 2.0998, 2.2887, 1.6809, 2.2244, 2.4430, 2.3594, 1.4511], device='cuda:6'), covar=tensor([0.0661, 0.0731, 0.0698, 0.0953, 0.0575, 0.0691, 0.0675, 0.1717], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0134, 0.0142, 0.0125, 0.0122, 0.0143, 0.0144, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:21:05,227 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64679.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:19,529 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64701.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:21,791 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64703.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:22,890 INFO [finetune.py:976] (6/7) Epoch 12, batch 1700, loss[loss=0.1617, simple_loss=0.2314, pruned_loss=0.046, over 4832.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2554, pruned_loss=0.06188, over 956242.64 frames. ], batch size: 33, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:21:27,404 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64709.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:36,643 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.75 vs. 
limit=5.0 2023-03-26 14:21:46,828 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64727.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:21:53,448 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.591e+02 1.930e+02 2.225e+02 5.420e+02, threshold=3.861e+02, percent-clipped=2.0 2023-03-26 14:21:58,921 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64745.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:03,112 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64752.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:05,307 INFO [finetune.py:976] (6/7) Epoch 12, batch 1750, loss[loss=0.2323, simple_loss=0.2932, pruned_loss=0.08564, over 4818.00 frames. ], tot_loss[loss=0.1912, simple_loss=0.2571, pruned_loss=0.06263, over 956399.04 frames. ], batch size: 45, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:22:10,841 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64764.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:29,524 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1916, 1.9625, 2.6255, 1.5089, 2.3347, 2.5328, 1.8562, 2.5897], device='cuda:6'), covar=tensor([0.1469, 0.2160, 0.1867, 0.2506, 0.1031, 0.1681, 0.2796, 0.1004], device='cuda:6'), in_proj_covar=tensor([0.0198, 0.0207, 0.0196, 0.0193, 0.0179, 0.0216, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:22:38,078 INFO [finetune.py:976] (6/7) Epoch 12, batch 1800, loss[loss=0.16, simple_loss=0.2384, pruned_loss=0.04087, over 4739.00 frames. ], tot_loss[loss=0.1943, simple_loss=0.2609, pruned_loss=0.06386, over 956990.26 frames. ], batch size: 27, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:22:38,806 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64806.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:43,532 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=64813.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:48,352 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:22:58,985 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.622e+02 1.968e+02 2.271e+02 4.247e+02, threshold=3.936e+02, percent-clipped=1.0 2023-03-26 14:23:11,388 INFO [finetune.py:976] (6/7) Epoch 12, batch 1850, loss[loss=0.2123, simple_loss=0.2674, pruned_loss=0.07857, over 4733.00 frames. ], tot_loss[loss=0.197, simple_loss=0.2633, pruned_loss=0.06533, over 955827.14 frames. ], batch size: 54, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:23:31,523 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. 
limit=5.0 2023-03-26 14:23:33,496 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=64888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:23:42,262 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0363, 1.8986, 2.4073, 1.5834, 2.0842, 2.4386, 1.8510, 2.4826], device='cuda:6'), covar=tensor([0.1459, 0.2123, 0.1368, 0.1971, 0.1051, 0.1385, 0.2461, 0.0922], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0206, 0.0195, 0.0192, 0.0179, 0.0215, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:23:45,140 INFO [finetune.py:976] (6/7) Epoch 12, batch 1900, loss[loss=0.2197, simple_loss=0.2894, pruned_loss=0.07499, over 4858.00 frames. ], tot_loss[loss=0.1971, simple_loss=0.2639, pruned_loss=0.0652, over 954983.59 frames. ], batch size: 31, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:24:06,070 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=64936.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:06,595 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.653e+02 1.911e+02 2.364e+02 4.358e+02, threshold=3.822e+02, percent-clipped=3.0 2023-03-26 14:24:06,756 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8678, 1.7201, 1.5988, 2.0568, 2.4216, 1.9386, 1.6742, 1.5317], device='cuda:6'), covar=tensor([0.1945, 0.2020, 0.1717, 0.1418, 0.1465, 0.1126, 0.2205, 0.1800], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0210, 0.0190, 0.0242, 0.0183, 0.0213, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:24:12,015 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=64945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:18,922 INFO [finetune.py:976] (6/7) Epoch 12, batch 1950, loss[loss=0.142, simple_loss=0.2235, pruned_loss=0.03018, over 4789.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2621, pruned_loss=0.06445, over 956438.06 frames. ], batch size: 29, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:24:40,808 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=64976.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:58,117 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65001.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:24:59,920 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65004.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:25:00,452 INFO [finetune.py:976] (6/7) Epoch 12, batch 2000, loss[loss=0.2001, simple_loss=0.2572, pruned_loss=0.07147, over 4232.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2593, pruned_loss=0.06371, over 955272.07 frames. 
], batch size: 65, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:25:21,641 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.609e+02 1.880e+02 2.233e+02 7.388e+02, threshold=3.760e+02, percent-clipped=1.0 2023-03-26 14:25:21,792 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65037.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 14:25:34,385 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65049.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:25:40,658 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8735, 2.4680, 2.3508, 1.3413, 2.6119, 2.1788, 1.9432, 2.3221], device='cuda:6'), covar=tensor([0.1219, 0.0930, 0.1812, 0.2079, 0.1674, 0.1916, 0.2169, 0.1147], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0199, 0.0201, 0.0185, 0.0215, 0.0207, 0.0223, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:25:42,373 INFO [finetune.py:976] (6/7) Epoch 12, batch 2050, loss[loss=0.153, simple_loss=0.213, pruned_loss=0.04647, over 4819.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2564, pruned_loss=0.06319, over 955650.14 frames. ], batch size: 25, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:25:42,503 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:25:45,411 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65059.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:25:55,220 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-26 14:26:21,868 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65101.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:24,691 INFO [finetune.py:976] (6/7) Epoch 12, batch 2100, loss[loss=0.1816, simple_loss=0.249, pruned_loss=0.0571, over 4755.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2554, pruned_loss=0.06295, over 955743.13 frames. ], batch size: 28, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:26:27,108 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65108.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:32,508 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:35,471 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65121.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:26:52,594 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.674e+02 1.985e+02 2.398e+02 5.597e+02, threshold=3.971e+02, percent-clipped=1.0 2023-03-26 14:26:59,362 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1756, 1.2077, 1.2434, 0.7554, 1.1425, 1.4923, 1.5129, 1.2195], device='cuda:6'), covar=tensor([0.0975, 0.0587, 0.0537, 0.0564, 0.0507, 0.0589, 0.0318, 0.0706], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0121, 0.0131, 0.0130, 0.0126, 0.0143, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3040e-05, 1.1136e-04, 8.7117e-05, 9.4722e-05, 9.2457e-05, 9.1241e-05, 1.0450e-04, 1.0605e-04], device='cuda:6') 2023-03-26 14:27:08,170 INFO [finetune.py:976] (6/7) Epoch 12, batch 2150, loss[loss=0.206, simple_loss=0.2899, pruned_loss=0.06104, over 4814.00 frames. 
], tot_loss[loss=0.1934, simple_loss=0.2586, pruned_loss=0.06411, over 954464.17 frames. ], batch size: 40, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:27:17,730 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65169.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:27:41,484 INFO [finetune.py:976] (6/7) Epoch 12, batch 2200, loss[loss=0.1544, simple_loss=0.228, pruned_loss=0.04042, over 4744.00 frames. ], tot_loss[loss=0.1964, simple_loss=0.2619, pruned_loss=0.06544, over 952558.18 frames. ], batch size: 27, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:27:46,519 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 14:28:03,264 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.670e+02 2.055e+02 2.491e+02 4.530e+02, threshold=4.111e+02, percent-clipped=2.0 2023-03-26 14:28:08,800 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65245.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:15,258 INFO [finetune.py:976] (6/7) Epoch 12, batch 2250, loss[loss=0.1509, simple_loss=0.2265, pruned_loss=0.03758, over 4834.00 frames. ], tot_loss[loss=0.1989, simple_loss=0.2648, pruned_loss=0.06654, over 954499.48 frames. ], batch size: 30, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:28:25,349 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.51 vs. limit=5.0 2023-03-26 14:28:35,987 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65285.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:41,278 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:42,625 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-26 14:28:48,516 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65304.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:28:49,048 INFO [finetune.py:976] (6/7) Epoch 12, batch 2300, loss[loss=0.1728, simple_loss=0.2446, pruned_loss=0.05057, over 4910.00 frames. ], tot_loss[loss=0.198, simple_loss=0.2645, pruned_loss=0.0657, over 954119.72 frames. ], batch size: 36, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:29:07,490 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65332.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:29:10,445 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.717e+01 1.673e+02 1.949e+02 2.267e+02 6.743e+02, threshold=3.897e+02, percent-clipped=1.0 2023-03-26 14:29:16,547 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65346.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:29:20,084 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65352.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:29:22,340 INFO [finetune.py:976] (6/7) Epoch 12, batch 2350, loss[loss=0.2261, simple_loss=0.2848, pruned_loss=0.0837, over 4872.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2606, pruned_loss=0.06419, over 954317.27 frames. 
], batch size: 31, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:29:24,829 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65359.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:02,658 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65401.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:04,997 INFO [finetune.py:976] (6/7) Epoch 12, batch 2400, loss[loss=0.2128, simple_loss=0.2715, pruned_loss=0.07707, over 4871.00 frames. ], tot_loss[loss=0.1925, simple_loss=0.2578, pruned_loss=0.06356, over 953271.78 frames. ], batch size: 34, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:30:06,755 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65407.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:07,418 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65408.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:09,183 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65411.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:26,324 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.174e+02 1.534e+02 1.885e+02 2.327e+02 5.518e+02, threshold=3.771e+02, percent-clipped=1.0 2023-03-26 14:30:31,090 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6180, 1.4673, 1.4468, 1.4931, 1.0907, 3.2998, 1.3025, 1.7373], device='cuda:6'), covar=tensor([0.3377, 0.2466, 0.2118, 0.2405, 0.1853, 0.0203, 0.2764, 0.1305], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0116, 0.0121, 0.0124, 0.0116, 0.0099, 0.0099, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:30:34,677 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65449.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:30:38,806 INFO [finetune.py:976] (6/7) Epoch 12, batch 2450, loss[loss=0.2005, simple_loss=0.2651, pruned_loss=0.06795, over 4827.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2553, pruned_loss=0.06299, over 952775.20 frames. ], batch size: 47, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:30:39,470 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65456.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:31:01,155 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7540, 2.4130, 1.8812, 1.2196, 2.1697, 2.2446, 2.0512, 2.1484], device='cuda:6'), covar=tensor([0.0678, 0.0764, 0.1460, 0.1799, 0.1326, 0.1753, 0.1840, 0.0927], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0200, 0.0201, 0.0186, 0.0216, 0.0209, 0.0224, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:31:02,946 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3318, 1.3036, 1.3688, 1.6132, 1.5149, 2.8471, 1.2722, 1.4303], device='cuda:6'), covar=tensor([0.1007, 0.1808, 0.1320, 0.0949, 0.1627, 0.0306, 0.1553, 0.1812], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0078, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:31:31,331 INFO [finetune.py:976] (6/7) Epoch 12, batch 2500, loss[loss=0.1977, simple_loss=0.2614, pruned_loss=0.06707, over 4754.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2552, pruned_loss=0.06255, over 950389.51 frames. 
], batch size: 26, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:31:40,956 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6665, 3.9077, 3.5975, 1.8325, 3.9923, 3.0481, 0.7809, 2.6536], device='cuda:6'), covar=tensor([0.2405, 0.2122, 0.1541, 0.3359, 0.0976, 0.0955, 0.4551, 0.1583], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0173, 0.0158, 0.0127, 0.0155, 0.0121, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 14:31:49,231 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65532.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:31:52,645 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.473e+01 1.650e+02 2.020e+02 2.341e+02 4.049e+02, threshold=4.040e+02, percent-clipped=1.0 2023-03-26 14:31:54,596 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0295, 0.9037, 0.9074, 1.0692, 1.1933, 1.1005, 1.0105, 0.9647], device='cuda:6'), covar=tensor([0.0355, 0.0302, 0.0624, 0.0310, 0.0311, 0.0481, 0.0315, 0.0400], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0107, 0.0138, 0.0113, 0.0101, 0.0103, 0.0092, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.0579e-05, 8.3360e-05, 1.0978e-04, 8.7984e-05, 7.9047e-05, 7.6249e-05, 6.9640e-05, 8.2458e-05], device='cuda:6') 2023-03-26 14:32:06,905 INFO [finetune.py:976] (6/7) Epoch 12, batch 2550, loss[loss=0.1642, simple_loss=0.2276, pruned_loss=0.0504, over 4771.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2595, pruned_loss=0.0636, over 951700.24 frames. ], batch size: 26, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:32:17,300 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.58 vs. limit=5.0 2023-03-26 14:32:40,939 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65593.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:32:48,714 INFO [finetune.py:976] (6/7) Epoch 12, batch 2600, loss[loss=0.1887, simple_loss=0.2594, pruned_loss=0.05897, over 4697.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2603, pruned_loss=0.06435, over 952120.66 frames. ], batch size: 23, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:32:48,976 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 14:32:58,049 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.45 vs. limit=5.0 2023-03-26 14:33:00,835 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. 
limit=2.0 2023-03-26 14:33:04,292 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2192, 2.7598, 2.7171, 1.2869, 2.9074, 2.0920, 0.6573, 1.9292], device='cuda:6'), covar=tensor([0.2241, 0.2113, 0.1755, 0.3442, 0.1231, 0.1288, 0.4248, 0.1741], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0174, 0.0159, 0.0128, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 14:33:06,605 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65632.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:33:10,009 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.524e+01 1.645e+02 2.049e+02 2.494e+02 4.393e+02, threshold=4.097e+02, percent-clipped=1.0 2023-03-26 14:33:12,521 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65641.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:33:22,367 INFO [finetune.py:976] (6/7) Epoch 12, batch 2650, loss[loss=0.2099, simple_loss=0.2747, pruned_loss=0.07255, over 4914.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2598, pruned_loss=0.06384, over 949313.18 frames. ], batch size: 38, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:33:38,618 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:33:49,076 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 14:33:55,608 INFO [finetune.py:976] (6/7) Epoch 12, batch 2700, loss[loss=0.2204, simple_loss=0.2778, pruned_loss=0.08148, over 4822.00 frames. ], tot_loss[loss=0.1923, simple_loss=0.2585, pruned_loss=0.06308, over 948893.81 frames. ], batch size: 33, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:33:59,757 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65711.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:34:17,010 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.547e+02 1.884e+02 2.200e+02 3.210e+02, threshold=3.769e+02, percent-clipped=0.0 2023-03-26 14:34:30,256 INFO [finetune.py:976] (6/7) Epoch 12, batch 2750, loss[loss=0.2044, simple_loss=0.2626, pruned_loss=0.07308, over 4832.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2566, pruned_loss=0.06272, over 949614.49 frames. ], batch size: 41, lr: 3.65e-03, grad_scale: 32.0 2023-03-26 14:34:38,468 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65759.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:35:30,775 INFO [finetune.py:976] (6/7) Epoch 12, batch 2800, loss[loss=0.1809, simple_loss=0.2397, pruned_loss=0.06106, over 4922.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2533, pruned_loss=0.0615, over 950874.67 frames. ], batch size: 37, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:35:39,091 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0287, 1.8089, 2.4032, 4.0976, 2.8622, 2.8317, 0.8375, 3.3098], device='cuda:6'), covar=tensor([0.1709, 0.1495, 0.1498, 0.0531, 0.0716, 0.1537, 0.2017, 0.0450], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0116, 0.0136, 0.0166, 0.0101, 0.0139, 0.0127, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 14:35:51,906 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. 
limit=2.0 2023-03-26 14:35:52,227 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.914e+01 1.578e+02 1.887e+02 2.176e+02 5.167e+02, threshold=3.774e+02, percent-clipped=1.0 2023-03-26 14:35:52,989 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=65838.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:35:53,541 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3851, 1.4014, 2.0691, 1.7687, 1.7673, 3.6933, 1.3973, 1.7962], device='cuda:6'), covar=tensor([0.1022, 0.1729, 0.1285, 0.1033, 0.1489, 0.0287, 0.1590, 0.1636], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0078, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:35:55,175 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 14:36:04,205 INFO [finetune.py:976] (6/7) Epoch 12, batch 2850, loss[loss=0.1612, simple_loss=0.2252, pruned_loss=0.04864, over 4717.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2514, pruned_loss=0.06115, over 949713.76 frames. ], batch size: 23, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:36:06,706 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-26 14:36:30,382 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1439, 1.8017, 1.9116, 0.8654, 1.9933, 2.2388, 1.9912, 1.8127], device='cuda:6'), covar=tensor([0.0960, 0.0704, 0.0514, 0.0757, 0.0496, 0.0631, 0.0468, 0.0690], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0154, 0.0123, 0.0133, 0.0132, 0.0127, 0.0145, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.4235e-05, 1.1301e-04, 8.8473e-05, 9.5749e-05, 9.3577e-05, 9.2449e-05, 1.0561e-04, 1.0760e-04], device='cuda:6') 2023-03-26 14:36:36,968 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=65888.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:36:38,732 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0599, 1.6756, 1.8723, 1.8825, 1.6175, 1.6539, 1.8237, 1.7001], device='cuda:6'), covar=tensor([0.5785, 0.5628, 0.4960, 0.5799, 0.6997, 0.5465, 0.7214, 0.4831], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0237, 0.0254, 0.0259, 0.0256, 0.0230, 0.0273, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:36:45,241 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=65899.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:36:48,785 INFO [finetune.py:976] (6/7) Epoch 12, batch 2900, loss[loss=0.212, simple_loss=0.2943, pruned_loss=0.06484, over 4824.00 frames. ], tot_loss[loss=0.1904, simple_loss=0.2556, pruned_loss=0.06257, over 950240.77 frames. ], batch size: 51, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:37:10,264 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.527e+02 1.842e+02 2.377e+02 4.547e+02, threshold=3.684e+02, percent-clipped=3.0 2023-03-26 14:37:12,788 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=65941.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:37:20,118 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 14:37:27,698 INFO [finetune.py:976] (6/7) Epoch 12, batch 2950, loss[loss=0.2004, simple_loss=0.2664, pruned_loss=0.06719, over 4898.00 frames. 
], tot_loss[loss=0.1924, simple_loss=0.2583, pruned_loss=0.0632, over 950319.21 frames. ], batch size: 36, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:37:43,487 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2626, 3.6544, 3.8852, 4.0385, 4.0396, 3.8272, 4.3579, 1.3802], device='cuda:6'), covar=tensor([0.0797, 0.0829, 0.0730, 0.1050, 0.1156, 0.1457, 0.0636, 0.5506], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0242, 0.0275, 0.0292, 0.0329, 0.0282, 0.0300, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:38:02,431 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=65989.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:38:20,406 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 14:38:26,401 INFO [finetune.py:976] (6/7) Epoch 12, batch 3000, loss[loss=0.2107, simple_loss=0.2678, pruned_loss=0.07679, over 4714.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2607, pruned_loss=0.06412, over 952637.80 frames. ], batch size: 23, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:38:26,402 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 14:38:28,780 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8429, 1.7282, 1.7079, 1.6921, 1.1344, 3.0659, 1.2205, 1.7372], device='cuda:6'), covar=tensor([0.3193, 0.2103, 0.1864, 0.2297, 0.1684, 0.0250, 0.2342, 0.1166], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:38:32,571 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8379, 1.3019, 0.8912, 1.5858, 2.1207, 1.1926, 1.5389, 1.6353], device='cuda:6'), covar=tensor([0.1515, 0.2019, 0.2068, 0.1266, 0.2120, 0.2178, 0.1509, 0.2107], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0096, 0.0113, 0.0092, 0.0120, 0.0094, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 14:38:37,072 INFO [finetune.py:1010] (6/7) Epoch 12, validation: loss=0.1571, simple_loss=0.2281, pruned_loss=0.04309, over 2265189.00 frames. 2023-03-26 14:38:37,073 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 14:38:49,201 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=2.08 vs. limit=2.0 2023-03-26 14:38:58,512 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.619e+02 1.943e+02 2.343e+02 4.325e+02, threshold=3.886e+02, percent-clipped=3.0 2023-03-26 14:39:21,449 INFO [finetune.py:976] (6/7) Epoch 12, batch 3050, loss[loss=0.1788, simple_loss=0.2557, pruned_loss=0.05099, over 4804.00 frames. ], tot_loss[loss=0.1936, simple_loss=0.2604, pruned_loss=0.06339, over 952347.54 frames. 
], batch size: 39, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:40:06,363 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3586, 2.1703, 1.8469, 2.4236, 2.1364, 2.1252, 2.0727, 3.0241], device='cuda:6'), covar=tensor([0.4178, 0.5698, 0.3874, 0.4572, 0.4889, 0.2668, 0.5316, 0.1794], device='cuda:6'), in_proj_covar=tensor([0.0282, 0.0258, 0.0222, 0.0274, 0.0242, 0.0209, 0.0245, 0.0216], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:40:36,354 INFO [finetune.py:976] (6/7) Epoch 12, batch 3100, loss[loss=0.183, simple_loss=0.2391, pruned_loss=0.06347, over 4776.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2589, pruned_loss=0.06294, over 950231.45 frames. ], batch size: 26, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:41:19,784 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.628e+02 1.972e+02 2.398e+02 4.316e+02, threshold=3.945e+02, percent-clipped=3.0 2023-03-26 14:41:22,277 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2348, 2.0851, 2.1441, 0.9886, 2.2991, 2.5577, 2.1608, 2.0195], device='cuda:6'), covar=tensor([0.0863, 0.0609, 0.0433, 0.0653, 0.0551, 0.0441, 0.0494, 0.0613], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0154, 0.0123, 0.0132, 0.0131, 0.0127, 0.0145, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.3960e-05, 1.1254e-04, 8.8647e-05, 9.5535e-05, 9.3301e-05, 9.2046e-05, 1.0549e-04, 1.0728e-04], device='cuda:6') 2023-03-26 14:41:26,920 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9877, 1.7595, 2.4212, 1.5226, 2.1746, 2.2926, 1.6816, 2.4157], device='cuda:6'), covar=tensor([0.1558, 0.2186, 0.1543, 0.2230, 0.0860, 0.1627, 0.2893, 0.0913], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0205, 0.0193, 0.0190, 0.0178, 0.0214, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:41:27,547 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66144.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:41:40,146 INFO [finetune.py:976] (6/7) Epoch 12, batch 3150, loss[loss=0.1959, simple_loss=0.2593, pruned_loss=0.06623, over 4820.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2569, pruned_loss=0.06307, over 950803.44 frames. ], batch size: 33, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:41:48,972 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66160.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:42:04,556 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2006, 2.0183, 1.8207, 2.0859, 2.1751, 1.9117, 2.4683, 2.2154], device='cuda:6'), covar=tensor([0.1358, 0.2378, 0.2815, 0.2664, 0.2409, 0.1574, 0.3264, 0.1698], device='cuda:6'), in_proj_covar=tensor([0.0177, 0.0187, 0.0232, 0.0254, 0.0241, 0.0197, 0.0213, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:42:24,559 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66188.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:42:33,445 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66194.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:42:35,592 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. 
limit=2.0 2023-03-26 14:42:40,453 INFO [finetune.py:976] (6/7) Epoch 12, batch 3200, loss[loss=0.1705, simple_loss=0.2348, pruned_loss=0.05308, over 4825.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2532, pruned_loss=0.06122, over 953330.36 frames. ], batch size: 39, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:42:40,570 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66205.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:42:51,211 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66221.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 14:43:01,312 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66236.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:43:01,824 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.602e+02 1.878e+02 2.419e+02 4.134e+02, threshold=3.755e+02, percent-clipped=1.0 2023-03-26 14:43:02,320 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 14:43:14,216 INFO [finetune.py:976] (6/7) Epoch 12, batch 3250, loss[loss=0.2258, simple_loss=0.2958, pruned_loss=0.07793, over 4910.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2563, pruned_loss=0.06318, over 953323.89 frames. ], batch size: 35, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:43:16,207 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66258.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:43:48,268 INFO [finetune.py:976] (6/7) Epoch 12, batch 3300, loss[loss=0.2214, simple_loss=0.3022, pruned_loss=0.0703, over 4844.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2599, pruned_loss=0.06423, over 952114.72 frames. ], batch size: 49, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:43:52,502 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 14:43:57,493 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66319.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:44:01,421 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.72 vs. limit=5.0 2023-03-26 14:44:14,652 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.247e+02 1.642e+02 1.904e+02 2.370e+02 4.024e+02, threshold=3.808e+02, percent-clipped=2.0 2023-03-26 14:44:29,724 INFO [finetune.py:976] (6/7) Epoch 12, batch 3350, loss[loss=0.1692, simple_loss=0.2375, pruned_loss=0.05048, over 4799.00 frames. ], tot_loss[loss=0.1962, simple_loss=0.2626, pruned_loss=0.06495, over 954275.65 frames. 
], batch size: 26, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:44:33,342 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4073, 1.2994, 1.6246, 2.4717, 1.6719, 2.2087, 0.8819, 2.0691], device='cuda:6'), covar=tensor([0.1766, 0.1488, 0.1227, 0.0742, 0.0957, 0.1157, 0.1648, 0.0680], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0117, 0.0137, 0.0167, 0.0102, 0.0140, 0.0128, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 14:44:50,666 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0449, 1.9561, 1.8771, 2.2123, 2.4948, 2.1369, 2.0155, 1.5634], device='cuda:6'), covar=tensor([0.2268, 0.2076, 0.1898, 0.1545, 0.2130, 0.1194, 0.2219, 0.2078], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0207, 0.0210, 0.0190, 0.0241, 0.0183, 0.0212, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:45:02,613 INFO [finetune.py:976] (6/7) Epoch 12, batch 3400, loss[loss=0.2019, simple_loss=0.2805, pruned_loss=0.06165, over 4076.00 frames. ], tot_loss[loss=0.1974, simple_loss=0.2639, pruned_loss=0.06548, over 954023.99 frames. ], batch size: 65, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:45:03,230 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3937, 2.1992, 1.9190, 0.9324, 2.2271, 1.8539, 1.6018, 1.9418], device='cuda:6'), covar=tensor([0.0934, 0.1004, 0.1687, 0.2145, 0.1515, 0.2148, 0.2326, 0.1253], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0199, 0.0201, 0.0185, 0.0215, 0.0207, 0.0223, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:45:24,437 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.693e+02 1.994e+02 2.432e+02 3.824e+02, threshold=3.988e+02, percent-clipped=2.0 2023-03-26 14:45:25,847 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8639, 1.7531, 1.5717, 1.9417, 2.4881, 2.0110, 1.5945, 1.4694], device='cuda:6'), covar=tensor([0.2279, 0.2156, 0.1991, 0.1704, 0.1668, 0.1170, 0.2441, 0.1946], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0208, 0.0211, 0.0190, 0.0242, 0.0183, 0.0214, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:45:36,093 INFO [finetune.py:976] (6/7) Epoch 12, batch 3450, loss[loss=0.1817, simple_loss=0.2408, pruned_loss=0.06128, over 4680.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.262, pruned_loss=0.06429, over 953844.04 frames. ], batch size: 23, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:45:40,986 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 14:46:02,778 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66494.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:46:06,887 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66500.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:46:09,848 INFO [finetune.py:976] (6/7) Epoch 12, batch 3500, loss[loss=0.1708, simple_loss=0.2289, pruned_loss=0.05639, over 4894.00 frames. ], tot_loss[loss=0.193, simple_loss=0.2589, pruned_loss=0.06351, over 953705.22 frames. 
], batch size: 35, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:46:20,799 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66516.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:46:23,834 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66521.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:46:28,110 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6996, 0.7091, 1.7321, 1.5837, 1.5298, 1.4560, 1.5177, 1.6021], device='cuda:6'), covar=tensor([0.3391, 0.3879, 0.3287, 0.3403, 0.4553, 0.3322, 0.4033, 0.3154], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0239, 0.0255, 0.0261, 0.0258, 0.0233, 0.0275, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:46:36,428 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.655e+02 1.937e+02 2.486e+02 6.010e+02, threshold=3.875e+02, percent-clipped=2.0 2023-03-26 14:46:40,005 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66542.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:46:57,615 INFO [finetune.py:976] (6/7) Epoch 12, batch 3550, loss[loss=0.1848, simple_loss=0.2531, pruned_loss=0.05829, over 4841.00 frames. ], tot_loss[loss=0.1916, simple_loss=0.2562, pruned_loss=0.06344, over 952683.72 frames. ], batch size: 47, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:47:07,513 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3189, 2.2916, 2.3407, 1.5532, 2.3941, 2.5396, 2.4700, 2.0540], device='cuda:6'), covar=tensor([0.0602, 0.0618, 0.0712, 0.0959, 0.0613, 0.0620, 0.0561, 0.0966], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0133, 0.0141, 0.0123, 0.0122, 0.0142, 0.0141, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:47:18,794 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8096, 1.0579, 1.8419, 1.7431, 1.5879, 1.5713, 1.6264, 1.7235], device='cuda:6'), covar=tensor([0.3980, 0.4410, 0.3528, 0.3778, 0.4855, 0.3819, 0.4504, 0.3382], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0239, 0.0255, 0.0261, 0.0258, 0.0233, 0.0275, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:47:23,065 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66582.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:47:43,651 INFO [finetune.py:976] (6/7) Epoch 12, batch 3600, loss[loss=0.2929, simple_loss=0.3316, pruned_loss=0.1271, over 4156.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2544, pruned_loss=0.06267, over 953037.85 frames. ], batch size: 65, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:47:53,199 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. 
limit=5.0 2023-03-26 14:47:53,907 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66614.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:48:03,971 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6778, 1.2926, 0.8495, 1.6199, 2.0180, 1.2792, 1.4158, 1.5031], device='cuda:6'), covar=tensor([0.2035, 0.2907, 0.2614, 0.1672, 0.2426, 0.2556, 0.2112, 0.3040], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0112, 0.0092, 0.0120, 0.0094, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 14:48:06,739 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 14:48:08,738 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.581e+02 1.999e+02 2.430e+02 3.919e+02, threshold=3.999e+02, percent-clipped=1.0 2023-03-26 14:48:09,425 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8051, 3.8652, 3.5892, 1.7768, 3.9392, 3.0075, 0.7583, 2.7083], device='cuda:6'), covar=tensor([0.2292, 0.2120, 0.1600, 0.3679, 0.1041, 0.0981, 0.4825, 0.1611], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0174, 0.0159, 0.0128, 0.0155, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 14:48:18,777 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0421, 1.6869, 2.5752, 3.9356, 2.6549, 2.6944, 1.5476, 3.2229], device='cuda:6'), covar=tensor([0.1789, 0.1608, 0.1293, 0.0610, 0.0817, 0.1694, 0.1497, 0.0454], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0116, 0.0134, 0.0165, 0.0101, 0.0138, 0.0126, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 14:48:21,112 INFO [finetune.py:976] (6/7) Epoch 12, batch 3650, loss[loss=0.1779, simple_loss=0.2498, pruned_loss=0.05296, over 4868.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2561, pruned_loss=0.06303, over 953540.47 frames. ], batch size: 31, lr: 3.64e-03, grad_scale: 64.0 2023-03-26 14:48:49,562 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7403, 3.4210, 3.4234, 1.8445, 3.6550, 2.7841, 1.2797, 2.5427], device='cuda:6'), covar=tensor([0.3380, 0.1826, 0.1340, 0.2870, 0.0918, 0.0882, 0.3549, 0.1303], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0173, 0.0159, 0.0128, 0.0155, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 14:48:49,766 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.99 vs. limit=2.0 2023-03-26 14:48:51,128 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-26 14:48:54,890 INFO [finetune.py:976] (6/7) Epoch 12, batch 3700, loss[loss=0.1856, simple_loss=0.2508, pruned_loss=0.06026, over 4769.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2596, pruned_loss=0.06405, over 953990.44 frames. 
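], batch size: 28, lr: 3.64e-03, grad_scale: 64.0

The Clipping_scale records from optim.py summarise the gradient norms used for adaptive clipping: the five values read as min/25%/median/75%/max of recently observed norms, and in every record in this log the printed threshold is clipping_scale times the median (2.0 * 1.994e+02 = 3.988e+02 in the 14:45:24 record earlier; the others match up to display rounding), with percent-clipped reporting how often norms exceeded it. A minimal sketch of that bookkeeping; the class name, buffer size, and reporting cadence are illustrative assumptions, not icefall's actual optimizer code:

    import numpy as np

    class MedianGradClipper:
        """Sketch: clip gradients to clipping_scale * median of recent norms,
        mirroring the 'Clipping_scale=2.0, grad-norm quartiles ...' records."""

        def __init__(self, clipping_scale=2.0, history=1024):
            self.clipping_scale = clipping_scale
            self.history = history
            self.norms = []       # ring buffer of recent gradient norms
            self.n_seen = 0
            self.n_clipped = 0

        def step(self, grad_norm):
            self.norms = (self.norms + [grad_norm])[-self.history:]
            threshold = self.clipping_scale * float(np.median(self.norms))
            self.n_seen += 1
            if grad_norm > threshold:
                self.n_clipped += 1
            # scale to multiply gradients by (1.0 means no clipping applied)
            return min(1.0, threshold / max(grad_norm, 1e-20))

        def report(self):
            q = np.percentile(self.norms, [0, 25, 50, 75, 100])
            return ("grad-norm quartiles " + " ".join(f"{v:.3e}" for v in q)
                    + f", threshold={self.clipping_scale * q[2]:.3e}"
                    + f", percent-clipped={100.0 * self.n_clipped / self.n_seen:.1f}")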
2023-03-26 14:49:14,944 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66731.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:49:18,458 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.675e+02 1.999e+02 2.466e+02 3.717e+02, threshold=3.997e+02, percent-clipped=0.0 2023-03-26 14:49:38,724 INFO [finetune.py:976] (6/7) Epoch 12, batch 3750, loss[loss=0.2169, simple_loss=0.2837, pruned_loss=0.07503, over 4906.00 frames. ], tot_loss[loss=0.1956, simple_loss=0.2615, pruned_loss=0.06488, over 951235.72 frames. ], batch size: 46, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:50:03,075 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=66792.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 14:50:03,121 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 14:50:06,649 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7505, 2.4692, 2.3404, 1.3472, 2.4537, 2.0191, 1.9194, 2.1667], device='cuda:6'), covar=tensor([0.1317, 0.0816, 0.1807, 0.2096, 0.1652, 0.2222, 0.2077, 0.1266], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0197, 0.0199, 0.0183, 0.0213, 0.0206, 0.0221, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:50:08,360 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 14:50:09,003 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66800.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:50:12,433 INFO [finetune.py:976] (6/7) Epoch 12, batch 3800, loss[loss=0.1896, simple_loss=0.2594, pruned_loss=0.0599, over 4857.00 frames. ], tot_loss[loss=0.1973, simple_loss=0.2632, pruned_loss=0.06566, over 951589.56 frames.
], batch size: 44, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:50:14,418 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4412, 1.5107, 1.4915, 0.8068, 1.6009, 1.7691, 1.8838, 1.4063], device='cuda:6'), covar=tensor([0.0933, 0.0605, 0.0506, 0.0581, 0.0435, 0.0525, 0.0280, 0.0681], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0153, 0.0123, 0.0132, 0.0130, 0.0127, 0.0144, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3861e-05, 1.1219e-04, 8.9012e-05, 9.5337e-05, 9.2351e-05, 9.2062e-05, 1.0486e-04, 1.0685e-04], device='cuda:6') 2023-03-26 14:50:16,237 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4482, 1.6402, 1.6856, 0.9144, 1.7483, 1.9450, 2.0142, 1.4611], device='cuda:6'), covar=tensor([0.0818, 0.0502, 0.0479, 0.0509, 0.0423, 0.0502, 0.0261, 0.0657], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0153, 0.0123, 0.0132, 0.0130, 0.0127, 0.0144, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3763e-05, 1.1204e-04, 8.8928e-05, 9.5236e-05, 9.2236e-05, 9.1944e-05, 1.0473e-04, 1.0673e-04], device='cuda:6') 2023-03-26 14:50:19,263 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66816.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:50:34,226 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.175e+02 1.668e+02 2.101e+02 2.666e+02 4.038e+02, threshold=4.202e+02, percent-clipped=1.0 2023-03-26 14:50:40,361 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66848.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:50:45,463 INFO [finetune.py:976] (6/7) Epoch 12, batch 3850, loss[loss=0.2431, simple_loss=0.2964, pruned_loss=0.09489, over 4830.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.2615, pruned_loss=0.06453, over 952672.39 frames. ], batch size: 30, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:50:51,376 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66864.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:50:52,598 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8421, 1.6552, 2.2200, 1.5110, 1.8468, 2.2822, 1.6500, 2.2915], device='cuda:6'), covar=tensor([0.1288, 0.2073, 0.1274, 0.1887, 0.0872, 0.1295, 0.2567, 0.0817], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0207, 0.0196, 0.0193, 0.0180, 0.0215, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:51:00,116 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=66877.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:51:04,881 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0 2023-03-26 14:51:18,886 INFO [finetune.py:976] (6/7) Epoch 12, batch 3900, loss[loss=0.1503, simple_loss=0.2316, pruned_loss=0.03444, over 4763.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2584, pruned_loss=0.06323, over 953728.86 frames. ], batch size: 28, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:51:24,990 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=66914.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:51:40,886 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.325e+01 1.578e+02 1.785e+02 2.294e+02 5.103e+02, threshold=3.570e+02, percent-clipped=1.0 2023-03-26 14:51:51,223 INFO [finetune.py:976] (6/7) Epoch 12, batch 3950, loss[loss=0.1428, simple_loss=0.2111, pruned_loss=0.03726, over 4832.00 frames. 
], tot_loss[loss=0.1876, simple_loss=0.2532, pruned_loss=0.061, over 954353.20 frames. ], batch size: 25, lr: 3.64e-03, grad_scale: 32.0 2023-03-26 14:51:53,025 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=66957.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:51:58,483 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=66962.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:52:47,109 INFO [finetune.py:976] (6/7) Epoch 12, batch 4000, loss[loss=0.1716, simple_loss=0.243, pruned_loss=0.05014, over 4924.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2526, pruned_loss=0.06048, over 954884.98 frames. ], batch size: 37, lr: 3.63e-03, grad_scale: 32.0 2023-03-26 14:53:04,485 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67018.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:53:18,612 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.442e+01 1.595e+02 2.014e+02 2.521e+02 4.335e+02, threshold=4.027e+02, percent-clipped=3.0 2023-03-26 14:53:28,874 INFO [finetune.py:976] (6/7) Epoch 12, batch 4050, loss[loss=0.1955, simple_loss=0.2768, pruned_loss=0.0571, over 4800.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2566, pruned_loss=0.06144, over 954278.72 frames. ], batch size: 45, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:53:47,881 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67083.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:53:50,805 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67087.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:53:57,399 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1037, 1.7311, 2.0063, 2.0194, 1.6783, 1.7773, 1.9464, 1.8859], device='cuda:6'), covar=tensor([0.4255, 0.4343, 0.3520, 0.4199, 0.4964, 0.3951, 0.5238, 0.3358], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0237, 0.0254, 0.0259, 0.0257, 0.0232, 0.0274, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:53:57,997 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3793, 2.2481, 1.7638, 2.4970, 2.2947, 1.9561, 2.8484, 2.3646], device='cuda:6'), covar=tensor([0.1368, 0.2512, 0.3140, 0.2730, 0.2482, 0.1638, 0.2931, 0.1823], device='cuda:6'), in_proj_covar=tensor([0.0179, 0.0188, 0.0234, 0.0256, 0.0244, 0.0199, 0.0214, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:54:01,618 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 14:54:02,030 INFO [finetune.py:976] (6/7) Epoch 12, batch 4100, loss[loss=0.2637, simple_loss=0.3169, pruned_loss=0.1052, over 4831.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2613, pruned_loss=0.06402, over 956343.02 frames. ], batch size: 49, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:54:29,986 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.725e+02 1.998e+02 2.409e+02 3.172e+02, threshold=3.997e+02, percent-clipped=0.0 2023-03-26 14:54:37,633 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67144.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:54:44,210 INFO [finetune.py:976] (6/7) Epoch 12, batch 4150, loss[loss=0.1726, simple_loss=0.2465, pruned_loss=0.04932, over 4225.00 frames. 
], tot_loss[loss=0.1947, simple_loss=0.2619, pruned_loss=0.0638, over 955242.76 frames. ], batch size: 65, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:54:57,233 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8274, 1.3637, 1.8659, 1.8165, 1.5996, 1.5585, 1.7212, 1.6650], device='cuda:6'), covar=tensor([0.3986, 0.4108, 0.3404, 0.3983, 0.5162, 0.4005, 0.4705, 0.3506], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0238, 0.0254, 0.0260, 0.0258, 0.0233, 0.0274, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:54:59,070 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67177.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:55:17,546 INFO [finetune.py:976] (6/7) Epoch 12, batch 4200, loss[loss=0.208, simple_loss=0.2802, pruned_loss=0.0679, over 4866.00 frames. ], tot_loss[loss=0.1955, simple_loss=0.2631, pruned_loss=0.06392, over 956861.49 frames. ], batch size: 34, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:55:21,763 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9228, 3.4972, 3.6763, 3.5769, 3.5240, 3.3963, 4.0298, 1.4218], device='cuda:6'), covar=tensor([0.1235, 0.1462, 0.1375, 0.1850, 0.1905, 0.2047, 0.1216, 0.6653], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0241, 0.0274, 0.0289, 0.0327, 0.0280, 0.0299, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:55:24,769 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6012, 1.7037, 1.4010, 1.5592, 2.0239, 1.8818, 1.6698, 1.4626], device='cuda:6'), covar=tensor([0.0313, 0.0269, 0.0556, 0.0303, 0.0183, 0.0507, 0.0283, 0.0408], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0109, 0.0139, 0.0113, 0.0102, 0.0104, 0.0094, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.2191e-05, 8.4561e-05, 1.1036e-04, 8.8156e-05, 7.9601e-05, 7.7140e-05, 7.1326e-05, 8.3539e-05], device='cuda:6') 2023-03-26 14:55:30,584 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67225.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:55:39,457 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.549e+02 1.852e+02 2.427e+02 4.145e+02, threshold=3.704e+02, percent-clipped=1.0 2023-03-26 14:55:50,531 INFO [finetune.py:976] (6/7) Epoch 12, batch 4250, loss[loss=0.1628, simple_loss=0.2299, pruned_loss=0.0478, over 4798.00 frames. ], tot_loss[loss=0.193, simple_loss=0.2602, pruned_loss=0.06286, over 957470.75 frames. ], batch size: 51, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:56:32,186 INFO [finetune.py:976] (6/7) Epoch 12, batch 4300, loss[loss=0.1704, simple_loss=0.2431, pruned_loss=0.04881, over 4902.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2569, pruned_loss=0.06203, over 956722.35 frames. 
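], batch size: 35, lr: 3.63e-03, grad_scale: 16.0

The attn_weights_entropy tensors from zipformer.py:2441 are per-head attention diagnostics: one value per attention head (eight of them in these dumps), plausibly the average entropy in nats of that head's attention distribution, so a value near zero flags a head that has collapsed onto a single frame while larger values mean attention is spread out; the covar/in_proj_covar/out_proj_covar tensors summarise the spread of the associated projections. A sketch of the entropy computation under that reading; the exact shapes and reduction in zipformer.py may differ:

    import torch

    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        """attn: (num_heads, batch, tgt_len, src_len), each last-dim row a
        probability distribution. Returns one mean entropy per head, like
        the 8-element tensors logged here. A sketch, not necessarily
        zipformer.py's exact reduction."""
        eps = 1.0e-20
        per_position = -(attn * (attn + eps).log()).sum(dim=-1)
        return per_position.mean(dim=(1, 2))

For scale: uniform attention over 100 frames would give log(100) ~ 4.6 nats, so the logged values of roughly 1 to 3 sit between collapsed and uniform.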
2023-03-26 14:56:37,170 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67313.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:56:47,467 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.0287, 4.3517, 4.5955, 4.8316, 4.7713, 4.4755, 5.1292, 1.6505], device='cuda:6'), covar=tensor([0.0605, 0.0765, 0.0675, 0.0760, 0.1003, 0.1321, 0.0435, 0.5505], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0242, 0.0275, 0.0290, 0.0327, 0.0280, 0.0300, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:56:52,172 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8313, 0.7409, 1.7439, 1.6546, 1.5740, 1.4997, 1.5151, 1.6907], device='cuda:6'), covar=tensor([0.3851, 0.4060, 0.4056, 0.3732, 0.5073, 0.4012, 0.4468, 0.3585], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0239, 0.0255, 0.0261, 0.0258, 0.0233, 0.0274, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 14:56:54,401 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.954e+01 1.550e+02 1.913e+02 2.348e+02 5.397e+02, threshold=3.825e+02, percent-clipped=3.0 2023-03-26 14:57:05,082 INFO [finetune.py:976] (6/7) Epoch 12, batch 4350, loss[loss=0.1639, simple_loss=0.2296, pruned_loss=0.0491, over 4793.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.2532, pruned_loss=0.06089, over 955744.13 frames. ], batch size: 29, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:57:28,530 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67387.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 14:57:39,912 INFO [finetune.py:976] (6/7) Epoch 12, batch 4400, loss[loss=0.2864, simple_loss=0.3219, pruned_loss=0.1255, over 4915.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2541, pruned_loss=0.06132, over 956873.91 frames.
], batch size: 36, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:57:59,502 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0072, 0.9960, 0.9620, 1.1121, 1.1652, 1.0972, 1.0073, 0.9629], device='cuda:6'), covar=tensor([0.0356, 0.0281, 0.0574, 0.0266, 0.0261, 0.0479, 0.0343, 0.0371], device='cuda:6'), in_proj_covar=tensor([0.0092, 0.0109, 0.0139, 0.0113, 0.0102, 0.0104, 0.0094, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.1985e-05, 8.4404e-05, 1.1035e-04, 8.8298e-05, 7.9599e-05, 7.7110e-05, 7.1107e-05, 8.3550e-05], device='cuda:6') 2023-03-26 14:58:04,824 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1080, 2.0325, 1.9049, 2.0872, 1.6787, 3.9742, 1.9221, 2.4530], device='cuda:6'), covar=tensor([0.3106, 0.2261, 0.1955, 0.2064, 0.1557, 0.0174, 0.2218, 0.1144], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0116, 0.0098, 0.0098, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 14:58:14,146 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67435.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 14:58:16,972 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.656e+02 1.972e+02 2.339e+02 4.406e+02, threshold=3.944e+02, percent-clipped=2.0 2023-03-26 14:58:20,629 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67439.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:58:30,809 INFO [finetune.py:976] (6/7) Epoch 12, batch 4450, loss[loss=0.2274, simple_loss=0.2891, pruned_loss=0.08281, over 4841.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2593, pruned_loss=0.06374, over 956664.28 frames. ], batch size: 49, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:59:03,972 INFO [finetune.py:976] (6/7) Epoch 12, batch 4500, loss[loss=0.1758, simple_loss=0.2445, pruned_loss=0.05352, over 4760.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2604, pruned_loss=0.06418, over 955867.61 frames. ], batch size: 27, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:59:09,866 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67513.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 14:59:13,030 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 14:59:26,033 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.686e+02 1.980e+02 2.352e+02 4.001e+02, threshold=3.961e+02, percent-clipped=1.0 2023-03-26 14:59:37,242 INFO [finetune.py:976] (6/7) Epoch 12, batch 4550, loss[loss=0.2019, simple_loss=0.2514, pruned_loss=0.07625, over 4406.00 frames. ], tot_loss[loss=0.1953, simple_loss=0.2621, pruned_loss=0.0642, over 954434.16 frames. ], batch size: 19, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 14:59:56,262 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67574.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:00:18,089 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 15:00:19,986 INFO [finetune.py:976] (6/7) Epoch 12, batch 4600, loss[loss=0.1773, simple_loss=0.2457, pruned_loss=0.05444, over 4809.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.2616, pruned_loss=0.0637, over 953626.49 frames. 
], batch size: 41, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:00:21,346 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6172, 2.8455, 2.6401, 1.9924, 2.8780, 3.0537, 2.8516, 2.6484], device='cuda:6'), covar=tensor([0.0674, 0.0542, 0.0753, 0.0903, 0.0544, 0.0625, 0.0642, 0.0841], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0133, 0.0141, 0.0124, 0.0122, 0.0141, 0.0142, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:00:24,393 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9976, 1.3272, 2.0073, 1.9298, 1.7728, 1.7179, 1.8431, 1.8478], device='cuda:6'), covar=tensor([0.3768, 0.4123, 0.3708, 0.3755, 0.5113, 0.3739, 0.4939, 0.3556], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0238, 0.0255, 0.0261, 0.0257, 0.0232, 0.0274, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:00:24,950 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67613.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:00:39,201 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5041, 1.6067, 1.7205, 0.9277, 1.6926, 1.8972, 1.9658, 1.4473], device='cuda:6'), covar=tensor([0.1014, 0.0721, 0.0524, 0.0660, 0.0474, 0.0602, 0.0298, 0.0844], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0155, 0.0124, 0.0133, 0.0133, 0.0128, 0.0146, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.4176e-05, 1.1357e-04, 8.8949e-05, 9.6203e-05, 9.4159e-05, 9.2843e-05, 1.0619e-04, 1.0742e-04], device='cuda:6') 2023-03-26 15:00:42,118 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.172e+02 1.477e+02 1.878e+02 2.272e+02 4.960e+02, threshold=3.756e+02, percent-clipped=1.0 2023-03-26 15:00:53,234 INFO [finetune.py:976] (6/7) Epoch 12, batch 4650, loss[loss=0.1614, simple_loss=0.2391, pruned_loss=0.04184, over 4765.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2588, pruned_loss=0.06302, over 954732.88 frames. ], batch size: 28, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:00:56,984 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67661.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:00:59,888 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 15:01:10,451 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:01:17,717 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67692.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:01:31,299 INFO [finetune.py:976] (6/7) Epoch 12, batch 4700, loss[loss=0.1808, simple_loss=0.2488, pruned_loss=0.05639, over 4771.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2561, pruned_loss=0.06216, over 956260.63 frames. 
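], batch size: 26, lr: 3.63e-03, grad_scale: 16.0

The zipformer.py:1188 records trace stochastic layer skipping: each encoder stack has its own warmup window in batches (warmup_begin/warmup_end), and on any batch a random subset of that stack's layers may be bypassed, logged as num_to_drop and layers_to_drop. Even this far past warmup (batch_count near 67.7k against warmup_end=4000.0) an occasional layer is still dropped, which suggests a small residual drop probability is kept as regularisation. A sketch of such a schedule; every probability below is an assumption, not a value from zipformer.py:

    import random

    def pick_layers_to_drop(batch_count, warmup_begin, warmup_end, num_layers,
                            p_warmup=0.5, p_final=0.05):
        """Sketch of warmup-dependent stochastic layer dropout in the spirit
        of the zipformer.py:1188 records; p_warmup and p_final are assumed."""
        if batch_count < warmup_begin:
            p = p_warmup
        elif batch_count < warmup_end:
            # anneal the drop probability down across the warmup window
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = p_warmup + frac * (p_final - p_warmup)
        else:
            p = p_final
        return {i for i in range(num_layers) if random.random() < p}

On most batches this returns set(), occasionally {1} or {3}, matching the pattern of the records around here.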
2023-03-26 15:01:52,247 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.4503, 4.7258, 5.0212, 5.2105, 5.1582, 4.9523, 5.5745, 1.9788], device='cuda:6'), covar=tensor([0.0806, 0.0826, 0.0753, 0.0991, 0.1165, 0.1528, 0.0556, 0.5350], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0243, 0.0276, 0.0290, 0.0329, 0.0280, 0.0301, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:01:56,972 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.553e+02 1.823e+02 2.116e+02 3.808e+02, threshold=3.646e+02, percent-clipped=1.0 2023-03-26 15:01:57,077 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=67739.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:01:58,309 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:02:05,961 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=67753.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:02:07,256 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 15:02:07,551 INFO [finetune.py:976] (6/7) Epoch 12, batch 4750, loss[loss=0.1748, simple_loss=0.2309, pruned_loss=0.05937, over 4255.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2537, pruned_loss=0.06132, over 957295.59 frames. ], batch size: 18, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:02:12,996 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8517, 1.1235, 1.8280, 1.7851, 1.5694, 1.5247, 1.6954, 1.6690], device='cuda:6'), covar=tensor([0.3605, 0.4277, 0.3436, 0.3690, 0.4881, 0.3692, 0.4534, 0.3176], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0238, 0.0255, 0.0261, 0.0257, 0.0232, 0.0274, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:02:19,336 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 15:02:28,910 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=67787.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:02:38,192 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9126, 1.8262, 1.4973, 1.5944, 1.7076, 1.6535, 1.7234, 2.4146], device='cuda:6'), covar=tensor([0.4472, 0.4562, 0.3529, 0.4688, 0.4391, 0.2744, 0.4203, 0.1850], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0259, 0.0223, 0.0276, 0.0243, 0.0210, 0.0246, 0.0218], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:02:40,329 INFO [finetune.py:976] (6/7) Epoch 12, batch 4800, loss[loss=0.2049, simple_loss=0.2783, pruned_loss=0.06572, over 4904.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2578, pruned_loss=0.06325, over 957033.93 frames.
], batch size: 35, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:02:51,785 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9455, 1.5187, 0.9507, 2.0100, 2.3765, 1.5642, 1.9078, 1.8658], device='cuda:6'), covar=tensor([0.1347, 0.1933, 0.1884, 0.1008, 0.1733, 0.1824, 0.1320, 0.1855], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0113, 0.0092, 0.0121, 0.0094, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 15:03:07,509 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.086e+02 1.756e+02 1.975e+02 2.556e+02 4.813e+02, threshold=3.950e+02, percent-clipped=3.0 2023-03-26 15:03:25,929 INFO [finetune.py:976] (6/7) Epoch 12, batch 4850, loss[loss=0.2061, simple_loss=0.2758, pruned_loss=0.06822, over 4776.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.2604, pruned_loss=0.06369, over 955882.42 frames. ], batch size: 28, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:03:39,910 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=67869.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:03:52,320 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8849, 1.8399, 1.6799, 1.9662, 2.3497, 2.0014, 1.7049, 1.5581], device='cuda:6'), covar=tensor([0.2291, 0.2047, 0.1872, 0.1602, 0.1777, 0.1172, 0.2308, 0.1971], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0206, 0.0210, 0.0190, 0.0239, 0.0182, 0.0213, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:04:03,159 INFO [finetune.py:976] (6/7) Epoch 12, batch 4900, loss[loss=0.2156, simple_loss=0.2697, pruned_loss=0.08079, over 4838.00 frames. ], tot_loss[loss=0.1957, simple_loss=0.2626, pruned_loss=0.06441, over 955692.47 frames. ], batch size: 31, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:04:26,939 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.717e+02 1.971e+02 2.418e+02 4.222e+02, threshold=3.942e+02, percent-clipped=1.0 2023-03-26 15:04:36,658 INFO [finetune.py:976] (6/7) Epoch 12, batch 4950, loss[loss=0.166, simple_loss=0.2197, pruned_loss=0.05611, over 3907.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2622, pruned_loss=0.06412, over 953929.66 frames. ], batch size: 17, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:04:46,082 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9929, 2.1082, 1.7967, 1.7735, 2.4337, 2.5093, 2.1209, 1.9956], device='cuda:6'), covar=tensor([0.0471, 0.0434, 0.0527, 0.0392, 0.0307, 0.0517, 0.0378, 0.0443], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0109, 0.0140, 0.0114, 0.0103, 0.0105, 0.0095, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.2514e-05, 8.4821e-05, 1.1120e-04, 8.8726e-05, 8.0181e-05, 7.7782e-05, 7.1915e-05, 8.4189e-05], device='cuda:6') 2023-03-26 15:04:53,973 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=67981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:05:20,900 INFO [finetune.py:976] (6/7) Epoch 12, batch 5000, loss[loss=0.207, simple_loss=0.2616, pruned_loss=0.07617, over 4823.00 frames. ], tot_loss[loss=0.1934, simple_loss=0.2601, pruned_loss=0.06333, over 955594.94 frames. 
], batch size: 33, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:05:30,400 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3312, 1.3282, 1.2150, 1.3784, 1.5903, 1.4407, 1.3033, 1.1775], device='cuda:6'), covar=tensor([0.0358, 0.0240, 0.0534, 0.0224, 0.0217, 0.0427, 0.0336, 0.0419], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0109, 0.0141, 0.0114, 0.0103, 0.0105, 0.0095, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.2556e-05, 8.4701e-05, 1.1156e-04, 8.8733e-05, 8.0213e-05, 7.7987e-05, 7.1936e-05, 8.4348e-05], device='cuda:6') 2023-03-26 15:05:41,214 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:05:43,409 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.964e+01 1.543e+02 1.867e+02 2.301e+02 3.447e+02, threshold=3.734e+02, percent-clipped=0.0 2023-03-26 15:05:46,835 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68042.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:05:50,405 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68048.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:05:54,509 INFO [finetune.py:976] (6/7) Epoch 12, batch 5050, loss[loss=0.1893, simple_loss=0.2511, pruned_loss=0.06372, over 4905.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2573, pruned_loss=0.06269, over 955179.13 frames. ], batch size: 32, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:06:15,510 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68087.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:06:16,671 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3871, 3.8167, 4.0074, 4.2119, 4.1478, 3.8693, 4.4548, 1.3731], device='cuda:6'), covar=tensor([0.0789, 0.0797, 0.0899, 0.0995, 0.1213, 0.1479, 0.0821, 0.5384], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0242, 0.0276, 0.0290, 0.0329, 0.0281, 0.0299, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:06:20,682 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3408, 2.2288, 2.1688, 2.4092, 2.7369, 2.3714, 1.9918, 1.8377], device='cuda:6'), covar=tensor([0.2108, 0.2004, 0.1783, 0.1620, 0.1799, 0.1070, 0.2272, 0.1801], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0205, 0.0209, 0.0189, 0.0239, 0.0181, 0.0213, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:06:22,354 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5762, 3.3095, 3.1385, 1.3754, 3.5240, 2.6308, 0.9311, 2.3485], device='cuda:6'), covar=tensor([0.2684, 0.2051, 0.1873, 0.3588, 0.1298, 0.1056, 0.4110, 0.1641], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0175, 0.0161, 0.0130, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 15:06:27,689 INFO [finetune.py:976] (6/7) Epoch 12, batch 5100, loss[loss=0.1474, simple_loss=0.2213, pruned_loss=0.03679, over 4869.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2538, pruned_loss=0.06141, over 954024.51 frames. 
], batch size: 31, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:06:36,259 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3864, 2.3089, 1.7621, 0.9621, 2.0203, 1.8942, 1.7043, 2.0049], device='cuda:6'), covar=tensor([0.0892, 0.0746, 0.1643, 0.2029, 0.1518, 0.2280, 0.2268, 0.1069], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0197, 0.0199, 0.0185, 0.0213, 0.0206, 0.0221, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:06:59,402 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.565e+02 1.837e+02 2.198e+02 4.078e+02, threshold=3.675e+02, percent-clipped=2.0 2023-03-26 15:07:05,469 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68148.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:07:10,949 INFO [finetune.py:976] (6/7) Epoch 12, batch 5150, loss[loss=0.1873, simple_loss=0.258, pruned_loss=0.0583, over 4843.00 frames. ], tot_loss[loss=0.1904, simple_loss=0.2553, pruned_loss=0.06273, over 953477.45 frames. ], batch size: 33, lr: 3.63e-03, grad_scale: 16.0 2023-03-26 15:07:16,492 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.3781, 4.5717, 4.8643, 5.2045, 5.0482, 4.7532, 5.4072, 1.7111], device='cuda:6'), covar=tensor([0.0748, 0.0869, 0.0688, 0.0922, 0.1197, 0.1610, 0.0585, 0.5863], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0241, 0.0275, 0.0290, 0.0328, 0.0281, 0.0299, 0.0292], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:07:19,523 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68169.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:07:43,716 INFO [finetune.py:976] (6/7) Epoch 12, batch 5200, loss[loss=0.2504, simple_loss=0.322, pruned_loss=0.08937, over 4841.00 frames. ], tot_loss[loss=0.1942, simple_loss=0.2593, pruned_loss=0.06455, over 952004.82 frames. ], batch size: 51, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:07:48,154 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0425, 1.4087, 1.9216, 1.9943, 1.7328, 1.6767, 1.8834, 1.7479], device='cuda:6'), covar=tensor([0.3485, 0.4059, 0.3587, 0.3679, 0.4713, 0.3716, 0.4479, 0.3389], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0238, 0.0255, 0.0261, 0.0258, 0.0233, 0.0275, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:07:51,092 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68217.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:08:05,785 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.664e+02 1.889e+02 2.252e+02 3.665e+02, threshold=3.778e+02, percent-clipped=0.0 2023-03-26 15:08:12,437 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5873, 3.9912, 4.2119, 4.3453, 4.3062, 4.1290, 4.7072, 1.6225], device='cuda:6'), covar=tensor([0.0832, 0.0765, 0.0751, 0.0967, 0.1249, 0.1446, 0.0541, 0.5672], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0242, 0.0276, 0.0291, 0.0329, 0.0282, 0.0300, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:08:16,525 INFO [finetune.py:976] (6/7) Epoch 12, batch 5250, loss[loss=0.2307, simple_loss=0.3038, pruned_loss=0.07878, over 4720.00 frames. 
], tot_loss[loss=0.1953, simple_loss=0.2613, pruned_loss=0.06461, over 953579.48 frames. ], batch size: 59, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:08:24,216 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7503, 1.2696, 1.0008, 1.6440, 2.0179, 1.3175, 1.5431, 1.7074], device='cuda:6'), covar=tensor([0.1212, 0.1742, 0.1638, 0.0959, 0.1797, 0.2004, 0.1154, 0.1483], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0096, 0.0113, 0.0092, 0.0121, 0.0094, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 15:08:50,035 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9578, 1.7802, 1.5666, 1.6699, 1.6908, 1.7366, 1.6796, 2.4266], device='cuda:6'), covar=tensor([0.4117, 0.4687, 0.3665, 0.4336, 0.4427, 0.2398, 0.4654, 0.1664], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0259, 0.0224, 0.0275, 0.0243, 0.0211, 0.0246, 0.0219], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:08:59,702 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7233, 3.8754, 3.6991, 1.9983, 4.0234, 2.9691, 0.7253, 2.6583], device='cuda:6'), covar=tensor([0.2354, 0.1636, 0.1400, 0.2999, 0.0849, 0.1011, 0.4423, 0.1398], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0174, 0.0159, 0.0128, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 15:09:03,120 INFO [finetune.py:976] (6/7) Epoch 12, batch 5300, loss[loss=0.2043, simple_loss=0.2694, pruned_loss=0.06958, over 4240.00 frames. ], tot_loss[loss=0.1957, simple_loss=0.2622, pruned_loss=0.06459, over 954938.11 frames. ], batch size: 66, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:09:13,897 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6750, 1.4956, 1.3925, 1.7596, 2.0441, 1.7964, 1.2521, 1.4447], device='cuda:6'), covar=tensor([0.2433, 0.2254, 0.2164, 0.1737, 0.1704, 0.1233, 0.2624, 0.2067], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0206, 0.0209, 0.0189, 0.0239, 0.0181, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:09:24,971 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:09:25,577 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68337.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:09:26,706 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.294e+02 1.838e+02 2.123e+02 2.651e+02 4.524e+02, threshold=4.245e+02, percent-clipped=5.0 2023-03-26 15:09:32,249 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68348.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:09:36,485 INFO [finetune.py:976] (6/7) Epoch 12, batch 5350, loss[loss=0.1782, simple_loss=0.2511, pruned_loss=0.05264, over 4700.00 frames. ], tot_loss[loss=0.1939, simple_loss=0.261, pruned_loss=0.06346, over 954719.90 frames. ], batch size: 59, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:09:47,283 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.89 vs. 
limit=5.0 2023-03-26 15:09:55,985 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68384.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:10:02,970 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1184, 0.9862, 0.9889, 0.3899, 0.8589, 1.1490, 1.1945, 1.0136], device='cuda:6'), covar=tensor([0.0852, 0.0559, 0.0551, 0.0556, 0.0536, 0.0635, 0.0370, 0.0696], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0156, 0.0125, 0.0134, 0.0133, 0.0129, 0.0147, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.4641e-05, 1.1435e-04, 9.0227e-05, 9.6345e-05, 9.4612e-05, 9.3454e-05, 1.0705e-04, 1.0822e-04], device='cuda:6') 2023-03-26 15:10:03,556 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7582, 1.7802, 1.8489, 1.1658, 1.9072, 1.9407, 1.8351, 1.5907], device='cuda:6'), covar=tensor([0.0558, 0.0608, 0.0651, 0.0889, 0.0612, 0.0608, 0.0612, 0.1042], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0133, 0.0142, 0.0124, 0.0122, 0.0141, 0.0142, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:10:04,693 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68396.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:10:06,581 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:10:10,286 INFO [finetune.py:976] (6/7) Epoch 12, batch 5400, loss[loss=0.1698, simple_loss=0.238, pruned_loss=0.05081, over 4805.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2599, pruned_loss=0.06354, over 953330.04 frames. ], batch size: 45, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:10:15,190 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2098, 1.9680, 2.1225, 0.8864, 2.3578, 2.5350, 2.2431, 1.9519], device='cuda:6'), covar=tensor([0.0905, 0.0775, 0.0579, 0.0700, 0.0507, 0.0544, 0.0458, 0.0747], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0156, 0.0125, 0.0133, 0.0133, 0.0128, 0.0146, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.4352e-05, 1.1406e-04, 8.9932e-05, 9.6170e-05, 9.4359e-05, 9.3212e-05, 1.0677e-04, 1.0787e-04], device='cuda:6') 2023-03-26 15:10:29,288 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4116, 2.1969, 1.8684, 2.0630, 2.1608, 2.1145, 2.0866, 2.8813], device='cuda:6'), covar=tensor([0.3985, 0.4676, 0.3641, 0.3967, 0.3967, 0.2706, 0.4170, 0.1746], device='cuda:6'), in_proj_covar=tensor([0.0282, 0.0257, 0.0222, 0.0273, 0.0242, 0.0209, 0.0244, 0.0218], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:10:40,848 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.541e+02 1.801e+02 2.082e+02 4.267e+02, threshold=3.602e+02, percent-clipped=1.0 2023-03-26 15:10:44,357 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68443.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:10:51,598 INFO [finetune.py:976] (6/7) Epoch 12, batch 5450, loss[loss=0.1481, simple_loss=0.2211, pruned_loss=0.03752, over 4824.00 frames. ], tot_loss[loss=0.19, simple_loss=0.256, pruned_loss=0.062, over 954341.22 frames. 
], batch size: 38, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:10:54,774 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68460.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:11:16,053 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-26 15:11:24,505 INFO [finetune.py:976] (6/7) Epoch 12, batch 5500, loss[loss=0.1711, simple_loss=0.2474, pruned_loss=0.04735, over 4822.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2529, pruned_loss=0.06056, over 956268.75 frames. ], batch size: 51, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:11:33,340 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. limit=2.0 2023-03-26 15:11:47,045 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.509e+02 1.942e+02 2.407e+02 6.603e+02, threshold=3.884e+02, percent-clipped=3.0 2023-03-26 15:11:54,525 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 15:11:59,912 INFO [finetune.py:976] (6/7) Epoch 12, batch 5550, loss[loss=0.2097, simple_loss=0.2819, pruned_loss=0.06875, over 4834.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2535, pruned_loss=0.06072, over 953989.42 frames. ], batch size: 49, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:12:23,724 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68578.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:12:24,408 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.77 vs. limit=5.0 2023-03-26 15:12:39,653 INFO [finetune.py:976] (6/7) Epoch 12, batch 5600, loss[loss=0.1568, simple_loss=0.2275, pruned_loss=0.04306, over 4813.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2572, pruned_loss=0.06152, over 953759.42 frames. ], batch size: 25, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:12:43,649 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 15:12:52,002 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2047, 1.3564, 1.3937, 0.7181, 1.2704, 1.5668, 1.6250, 1.2975], device='cuda:6'), covar=tensor([0.0778, 0.0467, 0.0432, 0.0444, 0.0418, 0.0468, 0.0262, 0.0485], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0156, 0.0125, 0.0133, 0.0133, 0.0129, 0.0146, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.4001e-05, 1.1400e-04, 8.9902e-05, 9.5957e-05, 9.4126e-05, 9.3445e-05, 1.0655e-04, 1.0742e-04], device='cuda:6') 2023-03-26 15:12:58,317 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68637.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:12:59,421 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.238e+02 1.664e+02 1.965e+02 2.319e+02 3.885e+02, threshold=3.931e+02, percent-clipped=1.0 2023-03-26 15:12:59,538 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68639.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:13:09,174 INFO [finetune.py:976] (6/7) Epoch 12, batch 5650, loss[loss=0.2168, simple_loss=0.2902, pruned_loss=0.07172, over 4909.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2607, pruned_loss=0.06259, over 954456.59 frames. ], batch size: 43, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:13:15,018 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.80 vs. 
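limit=5.0

The Whitening lines from scaling.py:679 compare a whiteness statistic of a module's activations against a limit (2.0 for the grouped 96- and 192-channel checks, 5.0 for the single-group 384-channel ones). A metric of 1.0 corresponds to a covariance proportional to the identity, and the constraint only intervenes when the metric exceeds the limit, so every value logged in this stretch is benign. A sketch of one standard statistic of this form, mean(diag(C @ C)) / mean(diag(C))**2 computed per channel group; scaling.py's exact formula may differ in detail:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        """Whiteness of activations x with shape (frames, channels), per
        channel group: mean(diag(C @ C)) / mean(diag(C))**2 for covariance C.
        Equals 1.0 when C is proportional to the identity; a sketch of the
        statistic behind 'metric=... vs. limit=...', details may vary."""
        n, c = x.shape
        g = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
        g = g - g.mean(dim=1, keepdim=True)             # centre per channel
        cov = torch.matmul(g.transpose(1, 2), g) / n    # (groups, d, d)
        diag = cov.diagonal(dim1=1, dim2=2)
        diag_sq = torch.matmul(cov, cov).diagonal(dim1=1, dim2=2)
        return (diag_sq.mean() / (diag.mean() ** 2 + 1e-20)).item()

Fully correlated channels drive the ratio up toward the group size, which is what the limit guards against.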
2023-03-26 15:13:27,861 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68685.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:13:27,902 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=68685.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:13:41,827 INFO [finetune.py:976] (6/7) Epoch 12, batch 5700, loss[loss=0.2094, simple_loss=0.2643, pruned_loss=0.07724, over 4306.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2573, pruned_loss=0.06212, over 938607.27 frames. ], batch size: 18, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:13:42,563 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4317, 2.2316, 1.9607, 2.2944, 2.2058, 2.1917, 2.1470, 2.8750], device='cuda:6'), covar=tensor([0.4259, 0.5014, 0.3741, 0.4051, 0.4082, 0.2756, 0.4513, 0.1861], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0225, 0.0277, 0.0245, 0.0211, 0.0248, 0.0221], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:14:27,875 INFO [finetune.py:976] (6/7) Epoch 13, batch 0, loss[loss=0.1807, simple_loss=0.2479, pruned_loss=0.05671, over 4835.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2479, pruned_loss=0.05671, over 4835.00 frames. ], batch size: 49, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:14:27,876 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 15:14:42,135 INFO [finetune.py:1010] (6/7) Epoch 13, validation: loss=0.1598, simple_loss=0.23, pruned_loss=0.04482, over 2265189.00 frames. 2023-03-26 15:14:42,136 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 15:14:47,268 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.546e+02 1.915e+02 2.253e+02 4.332e+02, threshold=3.830e+02, percent-clipped=1.0 2023-03-26 15:14:49,804 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=68743.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:14:52,140 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=68746.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:14:56,369 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4210, 1.5354, 1.5538, 0.8208, 1.6954, 1.8529, 1.8631, 1.4237], device='cuda:6'), covar=tensor([0.0896, 0.0603, 0.0581, 0.0565, 0.0430, 0.0558, 0.0300, 0.0653], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0155, 0.0123, 0.0132, 0.0132, 0.0127, 0.0145, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.3373e-05, 1.1311e-04, 8.8848e-05, 9.5292e-05, 9.3380e-05, 9.2446e-05, 1.0547e-04, 1.0665e-04], device='cuda:6') 2023-03-26 15:14:58,517 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68755.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:15:13,122 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2184, 2.1708, 2.1983, 1.6357, 2.3046, 2.4427, 2.2242, 1.9228], device='cuda:6'), covar=tensor([0.0615, 0.0659, 0.0786, 0.0953, 0.0639, 0.0734, 0.0748, 0.1118], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0133, 0.0142, 0.0125, 0.0122, 0.0141, 0.0142, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:15:15,981 INFO [finetune.py:976] (6/7) Epoch 13, batch 50, loss[loss=0.1354, simple_loss=0.2187, pruned_loss=0.026, over 4782.00 frames.
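], tot_loss[loss=0.2, simple_loss=0.265, pruned_loss=0.06747, over 215400.49 frames. ], batch size: 29, lr: 3.62e-03, grad_scale: 16.0

The epoch boundary above shows the bookkeeping resetting: at epoch 13, batch 0 tot_loss equals the batch's own loss, a full validation pass follows (loss=0.1598 over 2265189 frames, comfortably below the ~0.19 training average of late epoch 12), and by batch 50 the running average covers only ~215k frames instead of the ~950k typical earlier. The fractional frame counts point to a decayed, frame-weighted running average; a sketch under that assumption (the decay constant is ours, not finetune.py's):

    class RunningLoss:
        """Sketch of the decayed frame-weighted average suggested by the
        tot_loss records; decay=0.999 is an assumed value."""

        def __init__(self, decay=0.999):
            self.decay = decay
            self.weighted_loss = 0.0
            self.frames = 0.0   # decayed frame count; fractional, as logged

        def update(self, batch_loss, batch_frames):
            self.weighted_loss = self.decay * self.weighted_loss \
                + batch_loss * batch_frames
            self.frames = self.decay * self.frames + batch_frames
            return self.weighted_loss / self.frames  # reported as tot_loss

    # On the first batch of an epoch this returns the batch loss itself,
    # matching the "Epoch 13, batch 0" record where loss and tot_loss agree.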
2023-03-26 15:15:21,852 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=68791.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:15:22,507 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4106, 2.0804, 1.6201, 0.8021, 1.8382, 1.8916, 1.7951, 1.9891], device='cuda:6'), covar=tensor([0.0916, 0.0858, 0.1650, 0.2063, 0.1532, 0.2084, 0.2112, 0.0937], device='cuda:6'), in_proj_covar=tensor([0.0164, 0.0195, 0.0196, 0.0184, 0.0211, 0.0205, 0.0220, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:15:27,198 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1661, 1.4462, 2.1569, 1.9946, 1.8601, 1.8031, 1.9168, 1.9808], device='cuda:6'), covar=tensor([0.3524, 0.4010, 0.3454, 0.3616, 0.4797, 0.3843, 0.4518, 0.3235], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0237, 0.0254, 0.0260, 0.0256, 0.0232, 0.0273, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:15:56,576 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2599, 2.1377, 1.6480, 2.2572, 2.1573, 1.8657, 2.5330, 2.2273], device='cuda:6'), covar=tensor([0.1259, 0.2413, 0.3407, 0.2676, 0.2547, 0.1773, 0.3089, 0.1974], device='cuda:6'), in_proj_covar=tensor([0.0177, 0.0186, 0.0231, 0.0253, 0.0240, 0.0198, 0.0211, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:15:57,665 INFO [finetune.py:976] (6/7) Epoch 13, batch 100, loss[loss=0.2104, simple_loss=0.2707, pruned_loss=0.07512, over 4820.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2587, pruned_loss=0.06446, over 379050.39 frames. ], batch size: 38, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:16:02,754 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.134e+02 1.681e+02 1.901e+02 2.429e+02 4.753e+02, threshold=3.802e+02, percent-clipped=2.0 2023-03-26 15:16:07,214 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 15:16:31,429 INFO [finetune.py:976] (6/7) Epoch 13, batch 150, loss[loss=0.1432, simple_loss=0.2122, pruned_loss=0.03709, over 4775.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2535, pruned_loss=0.06202, over 507468.97 frames. ], batch size: 26, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:16:59,139 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6595, 1.5431, 1.4840, 1.6735, 1.2051, 3.5412, 1.2929, 1.7822], device='cuda:6'), covar=tensor([0.3301, 0.2417, 0.2173, 0.2244, 0.1774, 0.0200, 0.2569, 0.1360], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:17:05,106 INFO [finetune.py:976] (6/7) Epoch 13, batch 200, loss[loss=0.2128, simple_loss=0.2816, pruned_loss=0.07198, over 4850.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2514, pruned_loss=0.06153, over 605491.43 frames.
], batch size: 44, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:17:05,765 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=68934.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:17:09,210 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.222e+02 1.603e+02 1.930e+02 2.189e+02 8.191e+02, threshold=3.861e+02, percent-clipped=2.0 2023-03-26 15:17:46,316 INFO [finetune.py:976] (6/7) Epoch 13, batch 250, loss[loss=0.2264, simple_loss=0.3039, pruned_loss=0.07443, over 4849.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2532, pruned_loss=0.06186, over 683875.91 frames. ], batch size: 44, lr: 3.62e-03, grad_scale: 16.0 2023-03-26 15:18:05,435 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6640, 2.3097, 3.0819, 4.5330, 3.3763, 3.1573, 1.4032, 3.8814], device='cuda:6'), covar=tensor([0.1377, 0.1171, 0.1133, 0.0536, 0.0600, 0.1129, 0.1716, 0.0334], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0116, 0.0134, 0.0165, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 15:18:19,712 INFO [finetune.py:976] (6/7) Epoch 13, batch 300, loss[loss=0.2509, simple_loss=0.3107, pruned_loss=0.09555, over 4870.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2589, pruned_loss=0.06438, over 744012.31 frames. ], batch size: 34, lr: 3.62e-03, grad_scale: 32.0 2023-03-26 15:18:23,318 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.585e+02 1.877e+02 2.328e+02 4.201e+02, threshold=3.755e+02, percent-clipped=2.0 2023-03-26 15:18:24,577 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69041.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:18:26,350 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69043.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:18:30,875 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5057, 1.9037, 1.4181, 1.5923, 2.0826, 1.9405, 1.7508, 1.7307], device='cuda:6'), covar=tensor([0.0415, 0.0307, 0.0545, 0.0319, 0.0247, 0.0518, 0.0311, 0.0377], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0108, 0.0139, 0.0113, 0.0101, 0.0104, 0.0094, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2202e-05, 8.3653e-05, 1.1029e-04, 8.7830e-05, 7.8874e-05, 7.7218e-05, 7.1315e-05, 8.3284e-05], device='cuda:6') 2023-03-26 15:18:34,495 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:18:55,353 INFO [finetune.py:976] (6/7) Epoch 13, batch 350, loss[loss=0.1794, simple_loss=0.2499, pruned_loss=0.05449, over 4113.00 frames. ], tot_loss[loss=0.195, simple_loss=0.2609, pruned_loss=0.06452, over 791709.49 frames. ], batch size: 17, lr: 3.62e-03, grad_scale: 32.0 2023-03-26 15:19:18,983 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69103.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:19:19,631 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69104.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:19:41,449 INFO [finetune.py:976] (6/7) Epoch 13, batch 400, loss[loss=0.2386, simple_loss=0.2897, pruned_loss=0.0937, over 4727.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2615, pruned_loss=0.0639, over 828823.05 frames. 
], batch size: 54, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:19:50,063 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.689e+02 1.999e+02 2.345e+02 4.076e+02, threshold=3.998e+02, percent-clipped=3.0 2023-03-26 15:20:09,356 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 15:20:09,753 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69162.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:20:13,409 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69168.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:20:23,368 INFO [finetune.py:976] (6/7) Epoch 13, batch 450, loss[loss=0.2304, simple_loss=0.2729, pruned_loss=0.09402, over 4930.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.261, pruned_loss=0.06395, over 855629.08 frames. ], batch size: 38, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:20:29,468 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8441, 1.7194, 1.5467, 1.8262, 2.2402, 1.9262, 1.5378, 1.5016], device='cuda:6'), covar=tensor([0.2100, 0.1972, 0.1881, 0.1646, 0.1705, 0.1154, 0.2385, 0.1818], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0206, 0.0209, 0.0190, 0.0239, 0.0181, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:20:30,192 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0 2023-03-26 15:21:04,205 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69223.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:05,420 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:07,851 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:10,194 INFO [finetune.py:976] (6/7) Epoch 13, batch 500, loss[loss=0.193, simple_loss=0.258, pruned_loss=0.06398, over 4721.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2575, pruned_loss=0.0623, over 878671.00 frames. ], batch size: 59, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:21:10,907 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69234.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:14,297 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.659e+02 1.928e+02 2.205e+02 4.798e+02, threshold=3.855e+02, percent-clipped=1.0 2023-03-26 15:21:18,080 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69245.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:21:37,047 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:43,323 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69282.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:43,878 INFO [finetune.py:976] (6/7) Epoch 13, batch 550, loss[loss=0.162, simple_loss=0.2303, pruned_loss=0.04685, over 4808.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2544, pruned_loss=0.06087, over 895711.65 frames. 
], batch size: 25, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:21:45,797 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1920, 3.6075, 3.8246, 4.0503, 3.9760, 3.7019, 4.2644, 1.2391], device='cuda:6'), covar=tensor([0.0761, 0.0826, 0.0769, 0.0905, 0.1092, 0.1425, 0.0714, 0.5505], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0242, 0.0275, 0.0291, 0.0327, 0.0281, 0.0300, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:21:45,836 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69286.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:21:59,421 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69306.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:22:00,643 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3307, 3.0395, 2.6041, 1.4711, 2.8112, 2.4553, 2.3065, 2.5936], device='cuda:6'), covar=tensor([0.0855, 0.0861, 0.1927, 0.2288, 0.1889, 0.2063, 0.2059, 0.1295], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0196, 0.0198, 0.0184, 0.0213, 0.0207, 0.0222, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:22:08,839 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69320.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:15,818 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5346, 0.9667, 0.7769, 1.3340, 1.8756, 0.7474, 1.2132, 1.3988], device='cuda:6'), covar=tensor([0.1471, 0.2223, 0.1885, 0.1194, 0.2024, 0.1965, 0.1542, 0.1901], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0094, 0.0112, 0.0091, 0.0119, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 15:22:17,555 INFO [finetune.py:976] (6/7) Epoch 13, batch 600, loss[loss=0.2373, simple_loss=0.2973, pruned_loss=0.08869, over 4749.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2551, pruned_loss=0.06128, over 908750.30 frames. ], batch size: 59, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:22:18,288 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69334.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:22:21,206 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.536e+02 1.861e+02 2.296e+02 3.946e+02, threshold=3.721e+02, percent-clipped=1.0 2023-03-26 15:22:22,499 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69341.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:49,999 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69368.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:55,521 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.32 vs. limit=5.0 2023-03-26 15:22:58,834 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69381.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:22:59,983 INFO [finetune.py:976] (6/7) Epoch 13, batch 650, loss[loss=0.1721, simple_loss=0.2528, pruned_loss=0.04568, over 4898.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2591, pruned_loss=0.06289, over 919958.88 frames. 
], batch size: 37, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:23:00,107 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3983, 1.4532, 1.2068, 1.5158, 1.7567, 1.5614, 1.4147, 1.2099], device='cuda:6'), covar=tensor([0.0328, 0.0264, 0.0588, 0.0257, 0.0184, 0.0519, 0.0309, 0.0389], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0109, 0.0140, 0.0113, 0.0102, 0.0105, 0.0095, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.2797e-05, 8.4314e-05, 1.1096e-04, 8.8191e-05, 7.9450e-05, 7.7778e-05, 7.1519e-05, 8.3829e-05], device='cuda:6') 2023-03-26 15:23:02,556 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7887, 1.7683, 1.5509, 1.8687, 2.4579, 1.8695, 1.6352, 1.4382], device='cuda:6'), covar=tensor([0.2279, 0.2103, 0.2042, 0.1714, 0.1680, 0.1293, 0.2459, 0.2017], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0207, 0.0211, 0.0191, 0.0241, 0.0183, 0.0213, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:23:03,694 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69389.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:06,757 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69394.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:10,345 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69399.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:30,669 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69429.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:23:33,419 INFO [finetune.py:976] (6/7) Epoch 13, batch 700, loss[loss=0.1884, simple_loss=0.2524, pruned_loss=0.06219, over 4916.00 frames. ], tot_loss[loss=0.1937, simple_loss=0.2607, pruned_loss=0.06331, over 929405.91 frames. ], batch size: 36, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:23:37,531 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.702e+02 1.957e+02 2.425e+02 4.096e+02, threshold=3.913e+02, percent-clipped=2.0 2023-03-26 15:23:47,851 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69455.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:24:06,514 INFO [finetune.py:976] (6/7) Epoch 13, batch 750, loss[loss=0.1762, simple_loss=0.2386, pruned_loss=0.05686, over 4000.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2619, pruned_loss=0.06346, over 936506.18 frames. ], batch size: 17, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:24:12,917 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.63 vs. limit=2.0 2023-03-26 15:24:40,558 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69518.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:24:44,575 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69524.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:24:50,489 INFO [finetune.py:976] (6/7) Epoch 13, batch 800, loss[loss=0.1858, simple_loss=0.248, pruned_loss=0.06175, over 4905.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2615, pruned_loss=0.06299, over 940959.98 frames. 
], batch size: 37, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:24:57,837 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.694e+02 1.982e+02 2.355e+02 4.334e+02, threshold=3.964e+02, percent-clipped=1.0 2023-03-26 15:25:18,051 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-26 15:25:26,604 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 15:25:47,576 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69581.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:25:48,752 INFO [finetune.py:976] (6/7) Epoch 13, batch 850, loss[loss=0.1533, simple_loss=0.2309, pruned_loss=0.03788, over 4852.00 frames. ], tot_loss[loss=0.1909, simple_loss=0.2584, pruned_loss=0.06165, over 944027.51 frames. ], batch size: 31, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:26:03,120 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69601.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:26:21,359 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69629.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:26:24,227 INFO [finetune.py:976] (6/7) Epoch 13, batch 900, loss[loss=0.1852, simple_loss=0.2427, pruned_loss=0.06381, over 4695.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2569, pruned_loss=0.06125, over 948189.70 frames. ], batch size: 23, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:26:27,889 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.604e+02 1.856e+02 2.224e+02 3.601e+02, threshold=3.711e+02, percent-clipped=0.0 2023-03-26 15:26:55,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=69674.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:26:56,713 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69676.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:27:06,463 INFO [finetune.py:976] (6/7) Epoch 13, batch 950, loss[loss=0.1461, simple_loss=0.1993, pruned_loss=0.04644, over 4032.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.256, pruned_loss=0.06161, over 950585.55 frames. 
], batch size: 17, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:27:17,686 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7618, 1.6742, 1.5887, 1.6903, 1.2504, 4.2385, 1.6525, 1.9959], device='cuda:6'), covar=tensor([0.3229, 0.2312, 0.2079, 0.2280, 0.1765, 0.0116, 0.2641, 0.1293], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0119, 0.0123, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:27:29,380 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69699.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:02,975 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69724.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:04,254 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1909, 1.3489, 1.4191, 0.7433, 1.2912, 1.5929, 1.6250, 1.2715], device='cuda:6'), covar=tensor([0.0843, 0.0565, 0.0426, 0.0463, 0.0426, 0.0529, 0.0280, 0.0607], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0152, 0.0122, 0.0129, 0.0129, 0.0125, 0.0143, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.2631e-05, 1.1102e-04, 8.7890e-05, 9.3204e-05, 9.1623e-05, 9.0798e-05, 1.0433e-04, 1.0527e-04], device='cuda:6') 2023-03-26 15:28:08,354 INFO [finetune.py:976] (6/7) Epoch 13, batch 1000, loss[loss=0.2489, simple_loss=0.3233, pruned_loss=0.08721, over 4799.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2573, pruned_loss=0.06212, over 951933.71 frames. ], batch size: 51, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:28:10,722 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=69735.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:28:12,999 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.598e+02 1.856e+02 2.406e+02 4.029e+02, threshold=3.712e+02, percent-clipped=2.0 2023-03-26 15:28:18,442 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69747.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:20,806 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=69750.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:28:52,948 INFO [finetune.py:976] (6/7) Epoch 13, batch 1050, loss[loss=0.216, simple_loss=0.2941, pruned_loss=0.06893, over 4903.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2604, pruned_loss=0.06276, over 953138.64 frames. ], batch size: 36, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:29:18,258 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8572, 2.5631, 2.4075, 1.3166, 2.5060, 2.0557, 1.8883, 2.2476], device='cuda:6'), covar=tensor([0.1059, 0.0895, 0.1862, 0.2192, 0.1821, 0.2307, 0.2471, 0.1363], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0196, 0.0199, 0.0184, 0.0213, 0.0207, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:29:38,098 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69818.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:29:48,092 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69824.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:29:48,792 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. 
limit=2.0 2023-03-26 15:29:59,167 INFO [finetune.py:976] (6/7) Epoch 13, batch 1100, loss[loss=0.1618, simple_loss=0.2305, pruned_loss=0.04651, over 4749.00 frames. ], tot_loss[loss=0.1931, simple_loss=0.2609, pruned_loss=0.06265, over 953641.09 frames. ], batch size: 54, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:30:02,884 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.609e+02 1.898e+02 2.282e+02 6.010e+02, threshold=3.795e+02, percent-clipped=2.0 2023-03-26 15:30:26,676 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4620, 1.4879, 1.5191, 1.7646, 1.6465, 2.7053, 1.3852, 1.5773], device='cuda:6'), covar=tensor([0.0898, 0.1507, 0.1415, 0.0865, 0.1371, 0.0356, 0.1293, 0.1465], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:30:35,891 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69866.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:30:43,316 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69872.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:30:46,889 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5056, 2.4388, 1.9491, 2.7059, 2.4488, 2.0688, 3.0130, 2.4763], device='cuda:6'), covar=tensor([0.1419, 0.2374, 0.3428, 0.2785, 0.2778, 0.1893, 0.3035, 0.2135], device='cuda:6'), in_proj_covar=tensor([0.0178, 0.0187, 0.0232, 0.0254, 0.0242, 0.0199, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:30:51,917 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69881.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:30:53,491 INFO [finetune.py:976] (6/7) Epoch 13, batch 1150, loss[loss=0.2585, simple_loss=0.3129, pruned_loss=0.1021, over 4895.00 frames. ], tot_loss[loss=0.1932, simple_loss=0.2613, pruned_loss=0.06253, over 953338.62 frames. ], batch size: 43, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:31:12,320 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69901.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:31:26,351 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.25 vs. limit=5.0 2023-03-26 15:31:42,225 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69929.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:31:42,251 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69929.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 15:31:44,610 INFO [finetune.py:976] (6/7) Epoch 13, batch 1200, loss[loss=0.1724, simple_loss=0.2391, pruned_loss=0.05284, over 4841.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.2607, pruned_loss=0.06297, over 953633.06 frames. 
], batch size: 49, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:31:48,756 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.603e+02 1.893e+02 2.321e+02 3.158e+02, threshold=3.786e+02, percent-clipped=0.0 2023-03-26 15:31:55,836 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69949.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:32:07,778 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7058, 0.7047, 1.7194, 1.5611, 1.4932, 1.3886, 1.5311, 1.5779], device='cuda:6'), covar=tensor([0.3513, 0.3922, 0.3398, 0.3569, 0.4546, 0.3448, 0.4124, 0.3054], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0237, 0.0255, 0.0261, 0.0257, 0.0233, 0.0275, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:32:13,230 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=69976.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:13,760 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=69977.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:16,135 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5817, 1.4691, 1.3483, 1.6536, 1.5486, 1.6386, 0.8997, 1.3568], device='cuda:6'), covar=tensor([0.2162, 0.2069, 0.1879, 0.1542, 0.1635, 0.1169, 0.2686, 0.1929], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0206, 0.0209, 0.0189, 0.0239, 0.0182, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:32:17,839 INFO [finetune.py:976] (6/7) Epoch 13, batch 1250, loss[loss=0.1865, simple_loss=0.2456, pruned_loss=0.06368, over 4867.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2586, pruned_loss=0.0625, over 954207.63 frames. ], batch size: 31, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:32:38,910 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 15:32:46,508 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70024.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:46,558 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70024.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:32:50,177 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70030.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 15:32:52,397 INFO [finetune.py:976] (6/7) Epoch 13, batch 1300, loss[loss=0.1788, simple_loss=0.234, pruned_loss=0.06175, over 4152.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2559, pruned_loss=0.06192, over 955193.05 frames. 
], batch size: 65, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:32:56,052 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.095e+02 1.649e+02 1.897e+02 2.309e+02 4.234e+02, threshold=3.795e+02, percent-clipped=2.0 2023-03-26 15:33:03,832 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70050.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:33:19,136 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70072.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:33:19,770 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7411, 1.7741, 2.2010, 2.0986, 1.9148, 4.4277, 1.4696, 1.9110], device='cuda:6'), covar=tensor([0.0935, 0.1715, 0.1140, 0.0936, 0.1563, 0.0181, 0.1506, 0.1675], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:33:25,839 INFO [finetune.py:976] (6/7) Epoch 13, batch 1350, loss[loss=0.172, simple_loss=0.2442, pruned_loss=0.04984, over 4843.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2551, pruned_loss=0.06152, over 955472.49 frames. ], batch size: 33, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:33:36,445 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70098.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:33:53,164 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70110.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:34:08,077 INFO [finetune.py:976] (6/7) Epoch 13, batch 1400, loss[loss=0.1961, simple_loss=0.2811, pruned_loss=0.05559, over 4848.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2579, pruned_loss=0.06244, over 954492.46 frames. ], batch size: 47, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:34:12,155 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.202e+02 1.588e+02 1.939e+02 2.393e+02 8.943e+02, threshold=3.877e+02, percent-clipped=1.0 2023-03-26 15:34:23,615 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8377, 1.9995, 2.0462, 1.4369, 2.0558, 2.1145, 2.0594, 1.7462], device='cuda:6'), covar=tensor([0.0721, 0.0591, 0.0728, 0.0939, 0.0666, 0.0788, 0.0640, 0.1101], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0130, 0.0140, 0.0122, 0.0121, 0.0140, 0.0140, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:34:34,231 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=70171.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:34:41,772 INFO [finetune.py:976] (6/7) Epoch 13, batch 1450, loss[loss=0.166, simple_loss=0.2309, pruned_loss=0.05053, over 4730.00 frames. ], tot_loss[loss=0.1928, simple_loss=0.2597, pruned_loss=0.063, over 954460.06 frames. 
], batch size: 23, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:34:50,195 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6363, 3.7213, 3.5965, 1.7896, 3.9115, 2.9900, 0.9303, 2.5958], device='cuda:6'), covar=tensor([0.2683, 0.1653, 0.1468, 0.3299, 0.1029, 0.0931, 0.4186, 0.1442], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0175, 0.0159, 0.0129, 0.0157, 0.0121, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 15:35:19,952 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7681, 1.2336, 0.9637, 1.5908, 2.1711, 1.2373, 1.4356, 1.5610], device='cuda:6'), covar=tensor([0.1459, 0.2163, 0.2029, 0.1224, 0.1862, 0.2057, 0.1566, 0.1958], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0113, 0.0092, 0.0120, 0.0094, 0.0099, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 15:35:21,186 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4849, 1.0775, 0.7969, 1.3230, 1.9207, 0.7092, 1.2277, 1.4319], device='cuda:6'), covar=tensor([0.1480, 0.2150, 0.1900, 0.1206, 0.1989, 0.2025, 0.1526, 0.1915], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0113, 0.0092, 0.0120, 0.0094, 0.0099, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 15:35:23,506 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2394, 1.4023, 1.4655, 1.5820, 1.5108, 2.8885, 1.2291, 1.4748], device='cuda:6'), covar=tensor([0.0944, 0.1714, 0.1137, 0.0897, 0.1566, 0.0277, 0.1396, 0.1651], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:35:26,432 INFO [finetune.py:976] (6/7) Epoch 13, batch 1500, loss[loss=0.193, simple_loss=0.2533, pruned_loss=0.06638, over 4054.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2611, pruned_loss=0.06323, over 952814.37 frames. ], batch size: 17, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:35:30,136 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.613e+02 1.899e+02 2.364e+02 4.350e+02, threshold=3.798e+02, percent-clipped=1.0 2023-03-26 15:35:46,878 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70260.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:35:46,895 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8465, 1.7814, 1.6661, 1.8869, 2.1705, 1.9591, 1.6625, 1.5213], device='cuda:6'), covar=tensor([0.1903, 0.1827, 0.1697, 0.1518, 0.1807, 0.1084, 0.2343, 0.1720], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0206, 0.0209, 0.0189, 0.0239, 0.0182, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:36:10,537 INFO [finetune.py:976] (6/7) Epoch 13, batch 1550, loss[loss=0.1379, simple_loss=0.2018, pruned_loss=0.03697, over 4243.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2617, pruned_loss=0.06371, over 952961.16 frames. ], batch size: 18, lr: 3.61e-03, grad_scale: 32.0 2023-03-26 15:36:11,968 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. 
limit=2.0 2023-03-26 15:36:49,644 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=70321.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:36:58,942 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70330.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 15:37:00,631 INFO [finetune.py:976] (6/7) Epoch 13, batch 1600, loss[loss=0.1583, simple_loss=0.2352, pruned_loss=0.04065, over 4750.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.2592, pruned_loss=0.06295, over 953502.09 frames. ], batch size: 26, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:37:04,747 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.529e+02 1.873e+02 2.318e+02 5.550e+02, threshold=3.745e+02, percent-clipped=4.0 2023-03-26 15:37:09,099 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7214, 1.5724, 1.5137, 1.7205, 1.2781, 3.7099, 1.5020, 1.9808], device='cuda:6'), covar=tensor([0.3121, 0.2365, 0.2134, 0.2211, 0.1654, 0.0150, 0.2434, 0.1196], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:37:30,809 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70378.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 15:37:34,185 INFO [finetune.py:976] (6/7) Epoch 13, batch 1650, loss[loss=0.1825, simple_loss=0.2469, pruned_loss=0.05904, over 4767.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2564, pruned_loss=0.06192, over 953873.84 frames. ], batch size: 26, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:38:08,087 INFO [finetune.py:976] (6/7) Epoch 13, batch 1700, loss[loss=0.1535, simple_loss=0.2275, pruned_loss=0.03979, over 4777.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2542, pruned_loss=0.06091, over 955097.42 frames. 
], batch size: 26, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:38:11,732 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.265e+02 1.610e+02 1.926e+02 2.276e+02 4.227e+02, threshold=3.852e+02, percent-clipped=1.0 2023-03-26 15:38:12,971 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2847, 1.8906, 2.6635, 1.6638, 2.3419, 2.4649, 1.8684, 2.5918], device='cuda:6'), covar=tensor([0.1448, 0.1846, 0.1534, 0.2306, 0.0845, 0.1606, 0.2451, 0.0889], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0205, 0.0194, 0.0193, 0.0178, 0.0214, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:38:13,549 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2368, 3.6588, 3.9129, 4.0664, 3.9763, 3.7661, 4.2992, 1.4421], device='cuda:6'), covar=tensor([0.0784, 0.0813, 0.0781, 0.0981, 0.1238, 0.1476, 0.0696, 0.5475], device='cuda:6'), in_proj_covar=tensor([0.0357, 0.0247, 0.0280, 0.0296, 0.0334, 0.0285, 0.0306, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:38:13,566 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7043, 1.2571, 0.8639, 1.6568, 2.0113, 1.5323, 1.4713, 1.7366], device='cuda:6'), covar=tensor([0.1481, 0.2050, 0.2067, 0.1173, 0.2019, 0.1982, 0.1400, 0.1757], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0096, 0.0114, 0.0093, 0.0121, 0.0095, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 15:38:30,204 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70466.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:38:41,456 INFO [finetune.py:976] (6/7) Epoch 13, batch 1750, loss[loss=0.2316, simple_loss=0.3047, pruned_loss=0.07926, over 4813.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2556, pruned_loss=0.06189, over 954621.36 frames. ], batch size: 45, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:38:44,482 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3319, 1.1351, 1.0178, 1.1182, 1.5768, 1.4503, 1.3033, 1.1177], device='cuda:6'), covar=tensor([0.0282, 0.0331, 0.0761, 0.0359, 0.0239, 0.0392, 0.0287, 0.0407], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0110, 0.0141, 0.0114, 0.0102, 0.0106, 0.0096, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.3437e-05, 8.5330e-05, 1.1177e-04, 8.9122e-05, 7.9577e-05, 7.9067e-05, 7.2720e-05, 8.4665e-05], device='cuda:6') 2023-03-26 15:39:00,807 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.31 vs. limit=5.0 2023-03-26 15:39:14,415 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6452, 0.7157, 1.6379, 1.5007, 1.4533, 1.4132, 1.4191, 1.5567], device='cuda:6'), covar=tensor([0.3755, 0.4238, 0.3624, 0.3980, 0.4723, 0.3635, 0.4537, 0.3313], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0238, 0.0255, 0.0262, 0.0260, 0.0234, 0.0276, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:39:24,239 INFO [finetune.py:976] (6/7) Epoch 13, batch 1800, loss[loss=0.1821, simple_loss=0.256, pruned_loss=0.05404, over 4834.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2585, pruned_loss=0.06225, over 955180.59 frames. 
], batch size: 47, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:39:28,348 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.316e+01 1.597e+02 2.051e+02 2.548e+02 3.844e+02, threshold=4.101e+02, percent-clipped=0.0 2023-03-26 15:39:29,074 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4481, 1.2855, 1.6286, 2.4659, 1.7144, 2.2104, 0.8463, 2.0719], device='cuda:6'), covar=tensor([0.1709, 0.1599, 0.1182, 0.0757, 0.0915, 0.1164, 0.1703, 0.0706], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0118, 0.0135, 0.0167, 0.0102, 0.0140, 0.0128, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 15:39:42,846 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9526, 4.1431, 3.9741, 2.1699, 4.2212, 3.2105, 0.8878, 2.8827], device='cuda:6'), covar=tensor([0.2690, 0.1324, 0.1341, 0.3003, 0.0778, 0.0916, 0.4372, 0.1384], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0175, 0.0160, 0.0129, 0.0157, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 15:39:58,066 INFO [finetune.py:976] (6/7) Epoch 13, batch 1850, loss[loss=0.2033, simple_loss=0.2733, pruned_loss=0.06666, over 4817.00 frames. ], tot_loss[loss=0.1945, simple_loss=0.262, pruned_loss=0.06348, over 954760.09 frames. ], batch size: 38, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:40:02,401 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 15:40:26,913 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=70616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:40:28,627 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5302, 1.4279, 1.8322, 2.7476, 1.9800, 2.0780, 1.2194, 2.2135], device='cuda:6'), covar=tensor([0.1746, 0.1415, 0.1177, 0.0679, 0.0775, 0.1286, 0.1496, 0.0663], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0117, 0.0135, 0.0166, 0.0102, 0.0140, 0.0128, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0004, 0.0003], device='cuda:6') 2023-03-26 15:40:42,677 INFO [finetune.py:976] (6/7) Epoch 13, batch 1900, loss[loss=0.1986, simple_loss=0.2737, pruned_loss=0.06168, over 4704.00 frames. ], tot_loss[loss=0.1947, simple_loss=0.2627, pruned_loss=0.06337, over 954013.99 frames. ], batch size: 59, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:40:46,778 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.570e+02 1.884e+02 2.217e+02 6.026e+02, threshold=3.769e+02, percent-clipped=2.0 2023-03-26 15:41:27,334 INFO [finetune.py:976] (6/7) Epoch 13, batch 1950, loss[loss=0.2297, simple_loss=0.2822, pruned_loss=0.08859, over 4719.00 frames. ], tot_loss[loss=0.1929, simple_loss=0.2607, pruned_loss=0.06251, over 953042.91 frames. 
], batch size: 59, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:41:34,554 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7159, 1.9185, 1.5737, 1.6652, 2.2503, 1.9705, 1.8463, 1.8070], device='cuda:6'), covar=tensor([0.0396, 0.0325, 0.0535, 0.0312, 0.0242, 0.0607, 0.0317, 0.0365], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0109, 0.0140, 0.0113, 0.0101, 0.0105, 0.0095, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.2937e-05, 8.4819e-05, 1.1104e-04, 8.8209e-05, 7.8768e-05, 7.8023e-05, 7.2183e-05, 8.4054e-05], device='cuda:6') 2023-03-26 15:42:06,901 INFO [finetune.py:976] (6/7) Epoch 13, batch 2000, loss[loss=0.1917, simple_loss=0.2644, pruned_loss=0.05952, over 4936.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2572, pruned_loss=0.06129, over 951085.65 frames. ], batch size: 38, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:42:15,812 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.535e+02 1.807e+02 2.194e+02 3.140e+02, threshold=3.615e+02, percent-clipped=0.0 2023-03-26 15:42:18,595 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0 2023-03-26 15:42:26,195 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.61 vs. limit=5.0 2023-03-26 15:42:36,881 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:42:42,184 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6719, 1.4997, 1.5097, 1.5843, 0.9992, 3.1352, 1.2129, 1.6115], device='cuda:6'), covar=tensor([0.3246, 0.2410, 0.2005, 0.2326, 0.1945, 0.0214, 0.2755, 0.1328], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0114, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:42:48,490 INFO [finetune.py:976] (6/7) Epoch 13, batch 2050, loss[loss=0.164, simple_loss=0.2272, pruned_loss=0.05033, over 4664.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2543, pruned_loss=0.06012, over 951783.98 frames. ], batch size: 23, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:42:57,507 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2831, 1.2065, 1.1560, 1.3729, 1.5660, 1.3433, 1.2722, 1.1245], device='cuda:6'), covar=tensor([0.0377, 0.0318, 0.0588, 0.0277, 0.0214, 0.0551, 0.0370, 0.0434], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0109, 0.0140, 0.0114, 0.0101, 0.0105, 0.0096, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3027e-05, 8.4777e-05, 1.1129e-04, 8.8364e-05, 7.9042e-05, 7.7992e-05, 7.2314e-05, 8.3946e-05], device='cuda:6') 2023-03-26 15:43:09,368 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70814.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:43:22,315 INFO [finetune.py:976] (6/7) Epoch 13, batch 2100, loss[loss=0.218, simple_loss=0.2815, pruned_loss=0.07723, over 4765.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2546, pruned_loss=0.06134, over 951223.87 frames. 
], batch size: 59, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:43:26,465 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.827e+01 1.609e+02 1.892e+02 2.240e+02 3.187e+02, threshold=3.783e+02, percent-clipped=0.0 2023-03-26 15:43:35,600 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4077, 1.2578, 1.4532, 0.8995, 1.4938, 1.4584, 1.3851, 1.1199], device='cuda:6'), covar=tensor([0.0636, 0.0865, 0.0659, 0.0914, 0.0784, 0.0685, 0.0668, 0.1571], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0131, 0.0140, 0.0123, 0.0122, 0.0140, 0.0140, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:43:50,010 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-26 15:43:56,100 INFO [finetune.py:976] (6/7) Epoch 13, batch 2150, loss[loss=0.2603, simple_loss=0.3227, pruned_loss=0.09896, over 4915.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.2583, pruned_loss=0.06309, over 950599.69 frames. ], batch size: 36, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:44:35,058 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=70916.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:44:46,790 INFO [finetune.py:976] (6/7) Epoch 13, batch 2200, loss[loss=0.183, simple_loss=0.2408, pruned_loss=0.06264, over 4777.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2585, pruned_loss=0.06259, over 951915.56 frames. ], batch size: 29, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:44:50,484 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.701e+02 1.958e+02 2.316e+02 4.574e+02, threshold=3.916e+02, percent-clipped=1.0 2023-03-26 15:44:51,753 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. 
limit=2.0 2023-03-26 15:44:55,155 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1881, 2.0393, 2.4155, 1.7237, 2.1524, 2.3697, 1.9799, 2.4519], device='cuda:6'), covar=tensor([0.1314, 0.1497, 0.1145, 0.1569, 0.0759, 0.1155, 0.2072, 0.0791], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0205, 0.0194, 0.0191, 0.0178, 0.0214, 0.0216, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:44:57,565 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6252, 1.7100, 2.1188, 1.9572, 1.8895, 4.1631, 1.5847, 1.7769], device='cuda:6'), covar=tensor([0.0892, 0.1677, 0.1217, 0.0939, 0.1413, 0.0198, 0.1421, 0.1642], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:45:07,668 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=70964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:45:18,082 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6305, 1.1826, 0.8457, 1.5711, 2.0408, 1.3395, 1.4077, 1.5711], device='cuda:6'), covar=tensor([0.1565, 0.2181, 0.2006, 0.1256, 0.1994, 0.2029, 0.1418, 0.1924], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0096, 0.0113, 0.0093, 0.0121, 0.0095, 0.0099, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 15:45:19,186 INFO [finetune.py:976] (6/7) Epoch 13, batch 2250, loss[loss=0.1991, simple_loss=0.2583, pruned_loss=0.06994, over 4776.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2588, pruned_loss=0.06242, over 951090.95 frames. ], batch size: 28, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:45:21,039 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 15:45:26,616 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=70992.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:46:02,069 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:46:03,707 INFO [finetune.py:976] (6/7) Epoch 13, batch 2300, loss[loss=0.1951, simple_loss=0.253, pruned_loss=0.06858, over 4920.00 frames. ], tot_loss[loss=0.1925, simple_loss=0.2596, pruned_loss=0.06274, over 951400.12 frames. 
], batch size: 38, lr: 3.60e-03, grad_scale: 64.0 2023-03-26 15:46:08,245 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.237e+01 1.685e+02 2.000e+02 2.324e+02 3.629e+02, threshold=3.999e+02, percent-clipped=0.0 2023-03-26 15:46:23,775 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71053.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:46:47,371 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9222, 1.4227, 0.8456, 1.7847, 2.3128, 1.4467, 1.6324, 1.6871], device='cuda:6'), covar=tensor([0.1312, 0.2051, 0.2012, 0.1146, 0.1759, 0.1842, 0.1353, 0.1919], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0112, 0.0092, 0.0120, 0.0094, 0.0099, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 15:46:59,589 INFO [finetune.py:976] (6/7) Epoch 13, batch 2350, loss[loss=0.1775, simple_loss=0.2536, pruned_loss=0.05072, over 4902.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2578, pruned_loss=0.06188, over 952349.93 frames. ], batch size: 36, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:47:10,738 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71091.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:47:29,260 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8000, 1.7747, 1.9822, 1.2630, 1.9052, 1.9333, 1.9227, 1.5808], device='cuda:6'), covar=tensor([0.0686, 0.0746, 0.0689, 0.0914, 0.0836, 0.0761, 0.0653, 0.1253], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0132, 0.0142, 0.0124, 0.0124, 0.0142, 0.0141, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:47:44,843 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. limit=2.0 2023-03-26 15:47:59,110 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4636, 1.5646, 1.8817, 1.8927, 1.6211, 3.5161, 1.3867, 1.5980], device='cuda:6'), covar=tensor([0.0979, 0.1754, 0.1054, 0.0953, 0.1585, 0.0231, 0.1504, 0.1723], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:48:00,797 INFO [finetune.py:976] (6/7) Epoch 13, batch 2400, loss[loss=0.1453, simple_loss=0.2164, pruned_loss=0.03713, over 4855.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2546, pruned_loss=0.06038, over 953456.24 frames. ], batch size: 49, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:48:09,286 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.076e+01 1.502e+02 1.791e+02 2.104e+02 3.987e+02, threshold=3.583e+02, percent-clipped=0.0 2023-03-26 15:49:05,618 INFO [finetune.py:976] (6/7) Epoch 13, batch 2450, loss[loss=0.2006, simple_loss=0.2669, pruned_loss=0.06713, over 4904.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.252, pruned_loss=0.05957, over 954673.27 frames. 
], batch size: 43, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:49:14,022 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9403, 1.8945, 2.0285, 1.3212, 2.0401, 2.1220, 2.0097, 1.7074], device='cuda:6'), covar=tensor([0.0702, 0.0651, 0.0704, 0.0969, 0.0683, 0.0748, 0.0623, 0.1072], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0133, 0.0143, 0.0125, 0.0124, 0.0143, 0.0142, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:50:04,533 INFO [finetune.py:976] (6/7) Epoch 13, batch 2500, loss[loss=0.1725, simple_loss=0.2351, pruned_loss=0.05495, over 4819.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2527, pruned_loss=0.06035, over 954125.66 frames. ], batch size: 30, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:50:08,818 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.134e+02 1.629e+02 1.890e+02 2.415e+02 4.682e+02, threshold=3.780e+02, percent-clipped=4.0 2023-03-26 15:50:41,419 INFO [finetune.py:976] (6/7) Epoch 13, batch 2550, loss[loss=0.1623, simple_loss=0.2359, pruned_loss=0.0444, over 4825.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2576, pruned_loss=0.06195, over 952959.41 frames. ], batch size: 33, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:51:17,410 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5840, 1.4535, 1.8388, 1.1828, 1.6776, 1.8557, 1.3792, 2.0103], device='cuda:6'), covar=tensor([0.1347, 0.2255, 0.1376, 0.1900, 0.1012, 0.1380, 0.3196, 0.0881], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0205, 0.0194, 0.0191, 0.0178, 0.0215, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:51:22,577 INFO [finetune.py:976] (6/7) Epoch 13, batch 2600, loss[loss=0.2495, simple_loss=0.3034, pruned_loss=0.09776, over 4762.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.259, pruned_loss=0.06197, over 953699.52 frames. ], batch size: 59, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:51:26,871 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.678e+02 1.922e+02 2.428e+02 5.321e+02, threshold=3.843e+02, percent-clipped=3.0 2023-03-26 15:51:31,782 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71348.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:51:55,372 INFO [finetune.py:976] (6/7) Epoch 13, batch 2650, loss[loss=0.1811, simple_loss=0.2651, pruned_loss=0.04851, over 4822.00 frames. ], tot_loss[loss=0.1922, simple_loss=0.26, pruned_loss=0.06215, over 953483.93 frames. ], batch size: 49, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:51:58,311 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71386.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:51:58,973 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0064, 1.7431, 2.4481, 3.9316, 2.7725, 2.7302, 1.0438, 3.1840], device='cuda:6'), covar=tensor([0.1716, 0.1536, 0.1354, 0.0578, 0.0738, 0.1666, 0.1899, 0.0484], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0164, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 15:52:27,986 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. 
limit=2.0 2023-03-26 15:52:29,326 INFO [finetune.py:976] (6/7) Epoch 13, batch 2700, loss[loss=0.163, simple_loss=0.2326, pruned_loss=0.0467, over 4770.00 frames. ], tot_loss[loss=0.1906, simple_loss=0.2587, pruned_loss=0.06131, over 953207.82 frames. ], batch size: 26, lr: 3.60e-03, grad_scale: 32.0 2023-03-26 15:52:31,103 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7120, 3.8594, 3.5941, 1.9059, 3.9259, 2.7590, 0.8088, 2.6786], device='cuda:6'), covar=tensor([0.2460, 0.1966, 0.1584, 0.3237, 0.1106, 0.1167, 0.4638, 0.1591], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0175, 0.0160, 0.0129, 0.0157, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 15:52:34,539 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.578e+02 1.884e+02 2.307e+02 4.300e+02, threshold=3.769e+02, percent-clipped=2.0 2023-03-26 15:53:02,924 INFO [finetune.py:976] (6/7) Epoch 13, batch 2750, loss[loss=0.1884, simple_loss=0.2554, pruned_loss=0.06068, over 4830.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2565, pruned_loss=0.06089, over 953720.50 frames. ], batch size: 33, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:53:13,258 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 15:53:14,991 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8458, 1.6609, 1.5537, 2.0313, 2.1663, 1.9776, 1.4482, 1.4674], device='cuda:6'), covar=tensor([0.2291, 0.2110, 0.2072, 0.1586, 0.1728, 0.1112, 0.2571, 0.2021], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0208, 0.0211, 0.0190, 0.0241, 0.0183, 0.0214, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:53:26,753 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 15:53:36,644 INFO [finetune.py:976] (6/7) Epoch 13, batch 2800, loss[loss=0.162, simple_loss=0.228, pruned_loss=0.04803, over 4829.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2545, pruned_loss=0.0607, over 953458.10 frames. ], batch size: 39, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:53:40,881 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.564e+02 1.863e+02 2.304e+02 3.302e+02, threshold=3.726e+02, percent-clipped=0.0 2023-03-26 15:53:45,018 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6243, 1.4582, 1.2979, 1.6434, 1.5696, 1.5960, 1.0054, 1.3439], device='cuda:6'), covar=tensor([0.2005, 0.2009, 0.1888, 0.1538, 0.1611, 0.1212, 0.2417, 0.1837], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0207, 0.0210, 0.0190, 0.0240, 0.0183, 0.0214, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:54:23,057 INFO [finetune.py:976] (6/7) Epoch 13, batch 2850, loss[loss=0.2164, simple_loss=0.2742, pruned_loss=0.07932, over 4766.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2534, pruned_loss=0.06033, over 955224.60 frames. ], batch size: 28, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:54:52,360 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:55:04,224 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. 
limit=2.0 2023-03-26 15:55:06,323 INFO [finetune.py:976] (6/7) Epoch 13, batch 2900, loss[loss=0.1755, simple_loss=0.2528, pruned_loss=0.04909, over 4825.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2571, pruned_loss=0.06163, over 955521.21 frames. ], batch size: 33, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:55:15,495 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.635e+01 1.661e+02 1.944e+02 2.530e+02 6.475e+02, threshold=3.888e+02, percent-clipped=5.0 2023-03-26 15:55:24,627 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=71648.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:55:37,982 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8444, 1.7998, 1.7022, 2.1075, 2.1900, 2.1149, 1.5503, 1.5113], device='cuda:6'), covar=tensor([0.2052, 0.1855, 0.1663, 0.1470, 0.1647, 0.1030, 0.2325, 0.1831], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0208, 0.0210, 0.0190, 0.0241, 0.0183, 0.0214, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:55:49,606 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71677.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:55:58,581 INFO [finetune.py:976] (6/7) Epoch 13, batch 2950, loss[loss=0.2253, simple_loss=0.2927, pruned_loss=0.07894, over 4823.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.2603, pruned_loss=0.06247, over 956545.75 frames. ], batch size: 51, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:56:00,488 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=71686.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:56:09,945 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1066, 1.6334, 2.2656, 1.4767, 2.2148, 2.2416, 1.5691, 2.4173], device='cuda:6'), covar=tensor([0.1259, 0.2126, 0.1449, 0.2172, 0.0819, 0.1523, 0.3044, 0.0791], device='cuda:6'), in_proj_covar=tensor([0.0197, 0.0207, 0.0196, 0.0193, 0.0180, 0.0217, 0.0219, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:56:11,124 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=71696.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:56:39,793 INFO [finetune.py:976] (6/7) Epoch 13, batch 3000, loss[loss=0.2312, simple_loss=0.3046, pruned_loss=0.07889, over 4812.00 frames. ], tot_loss[loss=0.1946, simple_loss=0.2623, pruned_loss=0.06347, over 958245.19 frames. ], batch size: 40, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:56:39,794 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 15:56:44,078 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3823, 1.6299, 1.5381, 1.6692, 1.6578, 2.9728, 1.4341, 1.5980], device='cuda:6'), covar=tensor([0.0896, 0.1621, 0.0982, 0.0851, 0.1421, 0.0295, 0.1352, 0.1609], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:56:50,409 INFO [finetune.py:1010] (6/7) Epoch 13, validation: loss=0.1572, simple_loss=0.2278, pruned_loss=0.04333, over 2265189.00 frames. 
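Note on the loss fields: in every record above, the reported loss is consistent with a fixed linear combination of simple_loss and pruned_loss, with the simple term scaled by 0.5 and the pruned term unscaled (e.g. the validation record just above: 0.5 * 0.2278 + 0.04333 = 0.1572). A minimal sketch under that assumption follows; the function name and default scales are inferred from the numbers in these records, not quoted from finetune.py.

# Hedged sketch: how loss, simple_loss and pruned_loss in these records
# appear to relate. Scales are inferred from the logged values, not from
# icefall's actual code.
def combined_loss(simple_loss: float, pruned_loss: float,
                  simple_loss_scale: float = 0.5,
                  pruned_loss_scale: float = 1.0) -> float:
    return simple_loss_scale * simple_loss + pruned_loss_scale * pruned_loss

# Validation record above: loss=0.1572, simple_loss=0.2278, pruned_loss=0.04333
print(round(combined_loss(0.2278, 0.04333), 4))  # -> 0.1572

The same relation holds for the per-batch training records (e.g. 0.5 * 0.2199 + 0.03542 = 0.1454 for batch 3500 below), so movements in either component pull the headline loss in a predictable way.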
2023-03-26 15:56:50,410 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 15:56:51,091 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=71734.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 15:56:54,609 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1628, 2.0071, 1.5917, 1.9187, 1.9935, 1.7649, 2.3097, 2.1581], device='cuda:6'), covar=tensor([0.1267, 0.2078, 0.3185, 0.2747, 0.2661, 0.1670, 0.3311, 0.1609], device='cuda:6'), in_proj_covar=tensor([0.0179, 0.0187, 0.0233, 0.0254, 0.0244, 0.0199, 0.0213, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 15:56:55,669 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.204e+02 1.624e+02 1.953e+02 2.376e+02 4.887e+02, threshold=3.907e+02, percent-clipped=1.0 2023-03-26 15:57:06,513 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.52 vs. limit=5.0 2023-03-26 15:57:22,728 INFO [finetune.py:976] (6/7) Epoch 13, batch 3050, loss[loss=0.1896, simple_loss=0.2586, pruned_loss=0.06028, over 4825.00 frames. ], tot_loss[loss=0.1952, simple_loss=0.2631, pruned_loss=0.06364, over 958713.66 frames. ], batch size: 49, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:57:43,204 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 15:57:44,785 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4067, 1.5380, 1.8279, 1.7554, 1.6041, 3.2432, 1.2582, 1.5869], device='cuda:6'), covar=tensor([0.0931, 0.1674, 0.1138, 0.0920, 0.1496, 0.0251, 0.1471, 0.1603], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:57:52,071 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0 2023-03-26 15:57:55,478 INFO [finetune.py:976] (6/7) Epoch 13, batch 3100, loss[loss=0.1553, simple_loss=0.2229, pruned_loss=0.04384, over 4821.00 frames. ], tot_loss[loss=0.193, simple_loss=0.2606, pruned_loss=0.06276, over 957891.78 frames. ], batch size: 38, lr: 3.59e-03, grad_scale: 32.0 2023-03-26 15:58:01,083 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.937e+01 1.560e+02 1.843e+02 2.215e+02 5.565e+02, threshold=3.687e+02, percent-clipped=1.0 2023-03-26 15:58:09,087 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-26 15:58:21,886 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8222, 1.7115, 1.6052, 1.7175, 1.2637, 4.2508, 1.7051, 2.2847], device='cuda:6'), covar=tensor([0.3920, 0.2927, 0.2321, 0.2799, 0.1720, 0.0190, 0.2363, 0.1087], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 15:58:26,281 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 15:58:29,156 INFO [finetune.py:976] (6/7) Epoch 13, batch 3150, loss[loss=0.2057, simple_loss=0.2686, pruned_loss=0.07144, over 4930.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2577, pruned_loss=0.0624, over 957082.37 frames. 
2023-03-26 15:58:29,156 INFO [finetune.py:976] (6/7) Epoch 13, batch 3150, loss[loss=0.2057, simple_loss=0.2686, pruned_loss=0.07144, over 4930.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2577, pruned_loss=0.0624, over 957082.37 frames. ], batch size: 43, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:58:53,742 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=71919.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:59:03,052 INFO [finetune.py:976] (6/7) Epoch 13, batch 3200, loss[loss=0.1825, simple_loss=0.2587, pruned_loss=0.05314, over 4899.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2541, pruned_loss=0.06068, over 956259.43 frames. ], batch size: 43, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 15:59:07,316 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.561e+02 1.912e+02 2.265e+02 3.518e+02, threshold=3.824e+02, percent-clipped=0.0
2023-03-26 15:59:36,948 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-26 15:59:38,344 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9794, 4.1294, 3.9365, 2.1921, 4.2106, 3.1494, 1.0553, 2.9252], device='cuda:6'), covar=tensor([0.2317, 0.2459, 0.1511, 0.3185, 0.0925, 0.0967, 0.4607, 0.1540], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0173, 0.0158, 0.0129, 0.0156, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 15:59:40,765 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=71972.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:59:54,357 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=71980.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 15:59:56,074 INFO [finetune.py:976] (6/7) Epoch 13, batch 3250, loss[loss=0.1779, simple_loss=0.2537, pruned_loss=0.05104, over 4846.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2552, pruned_loss=0.06144, over 957031.96 frames. ], batch size: 44, lr: 3.59e-03, grad_scale: 32.0
2023-03-26 16:00:18,579 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 16:00:27,634 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.19 vs. limit=5.0
2023-03-26 16:00:39,651 INFO [finetune.py:976] (6/7) Epoch 13, batch 3300, loss[loss=0.1931, simple_loss=0.2689, pruned_loss=0.05871, over 4819.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.258, pruned_loss=0.06244, over 953250.50 frames. ], batch size: 33, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:00:44,480 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.095e+02 1.593e+02 1.995e+02 2.341e+02 5.205e+02, threshold=3.991e+02, percent-clipped=4.0
2023-03-26 16:01:02,427 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6143, 1.2446, 0.7656, 1.4564, 1.9028, 1.1822, 1.3596, 1.5407], device='cuda:6'), covar=tensor([0.1588, 0.2106, 0.2144, 0.1313, 0.2165, 0.2172, 0.1536, 0.2008], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0112, 0.0092, 0.0120, 0.0094, 0.0099, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
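Each [finetune.py:976] record carries two losses: loss[...] is the current batch, weighted by its frames, while tot_loss[...] is a running frame-weighted average whose "over ~9.5e5 frames." stays roughly constant, suggesting a decaying accumulator rather than a whole-epoch sum. A hypothetical sketch of that bookkeeping; the decay constant and class name are assumptions:

    # Hypothetical sketch of the tot_loss accumulator: each batch contributes
    # (loss * frames, frames), and both sums decay so the average tracks the
    # recent ~1e6 frames instead of the entire epoch.
    class FrameWeightedLoss:
        def __init__(self, decay=0.999):  # decay assumed
            self.decay = decay
            self.weighted_loss = 0.0  # decayed sum of loss * frames
            self.frames = 0.0         # decayed sum of frames

        def update(self, loss, num_frames):
            self.weighted_loss = self.decay * self.weighted_loss + loss * num_frames
            self.frames = self.decay * self.frames + num_frames
            return self.weighted_loss / self.frames  # reported as tot_loss

    tracker = FrameWeightedLoss()
    tot_loss = tracker.update(0.2057, 4930.0)  # one batch's contribution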
2023-03-26 16:01:29,182 INFO [finetune.py:976] (6/7) Epoch 13, batch 3350, loss[loss=0.1927, simple_loss=0.2661, pruned_loss=0.05969, over 4779.00 frames. ], tot_loss[loss=0.1926, simple_loss=0.2603, pruned_loss=0.06244, over 954496.91 frames. ], batch size: 54, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:01:29,313 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6651, 1.6484, 1.6001, 1.1108, 1.7300, 1.9092, 1.8516, 1.4745], device='cuda:6'), covar=tensor([0.0888, 0.0618, 0.0606, 0.0499, 0.0474, 0.0549, 0.0350, 0.0661], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0153, 0.0123, 0.0129, 0.0131, 0.0127, 0.0143, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3715e-05, 1.1160e-04, 8.8271e-05, 9.3198e-05, 9.2648e-05, 9.2259e-05, 1.0411e-04, 1.0582e-04], device='cuda:6')
2023-03-26 16:01:45,244 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4042, 2.2315, 1.9138, 0.9495, 2.1209, 1.8685, 1.6653, 1.9854], device='cuda:6'), covar=tensor([0.1143, 0.0818, 0.1834, 0.2093, 0.1609, 0.2276, 0.2438, 0.1194], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0196, 0.0200, 0.0185, 0.0213, 0.0207, 0.0222, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:02:07,917 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72129.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:02:11,043 INFO [finetune.py:976] (6/7) Epoch 13, batch 3400, loss[loss=0.1491, simple_loss=0.2166, pruned_loss=0.04078, over 4725.00 frames. ], tot_loss[loss=0.1944, simple_loss=0.2617, pruned_loss=0.06352, over 952912.67 frames. ], batch size: 23, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:02:16,764 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.708e+02 2.008e+02 2.371e+02 4.954e+02, threshold=4.015e+02, percent-clipped=4.0
2023-03-26 16:02:20,414 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5015, 1.5010, 1.6506, 0.9125, 1.5851, 1.8172, 1.8062, 1.4391], device='cuda:6'), covar=tensor([0.0960, 0.0700, 0.0424, 0.0567, 0.0396, 0.0569, 0.0356, 0.0726], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0153, 0.0123, 0.0129, 0.0131, 0.0127, 0.0143, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3806e-05, 1.1148e-04, 8.8217e-05, 9.3187e-05, 9.2649e-05, 9.2092e-05, 1.0401e-04, 1.0600e-04], device='cuda:6')
2023-03-26 16:02:49,871 INFO [finetune.py:976] (6/7) Epoch 13, batch 3450, loss[loss=0.1899, simple_loss=0.2617, pruned_loss=0.05905, over 4800.00 frames. ], tot_loss[loss=0.1938, simple_loss=0.2616, pruned_loss=0.063, over 953830.23 frames. ], batch size: 25, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:02:55,194 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72190.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:03:06,565 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5458, 1.1265, 0.7878, 1.3959, 1.9842, 0.6625, 1.2937, 1.4451], device='cuda:6'), covar=tensor([0.1486, 0.2050, 0.1762, 0.1181, 0.1869, 0.1934, 0.1450, 0.1826], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0112, 0.0092, 0.0120, 0.0094, 0.0099, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
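The [zipformer.py:2441] dumps report one entropy per attention head (eight values per stack here). A minimal sketch of that diagnostic, assuming the entropy of each head's weight distribution is averaged over query positions; a value near log(seq_len) means near-uniform attention, a low value means the head focuses on a few frames:

    import torch

    # Per-head entropy of attention weights, averaged over queries.
    def attn_weights_entropy(attn_weights, eps=1e-20):
        # attn_weights: (num_heads, num_queries, num_keys), rows sum to 1
        p = attn_weights.clamp(min=eps)
        entropy = -(p * p.log()).sum(dim=-1)  # (num_heads, num_queries)
        return entropy.mean(dim=-1)           # one scalar per head

    weights = torch.softmax(torch.randn(8, 50, 50), dim=-1)
    print(attn_weights_entropy(weights))  # tensor of 8 per-head entropies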
2023-03-26 16:03:23,345 INFO [finetune.py:976] (6/7) Epoch 13, batch 3500, loss[loss=0.1454, simple_loss=0.2199, pruned_loss=0.03542, over 4902.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2587, pruned_loss=0.0626, over 953050.03 frames. ], batch size: 36, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:03:29,065 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.641e+02 1.993e+02 2.438e+02 4.377e+02, threshold=3.986e+02, percent-clipped=2.0
2023-03-26 16:03:49,436 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-26 16:03:49,886 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72272.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:03:51,652 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72275.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:03:54,132 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3451, 1.5493, 1.6394, 1.7282, 1.5162, 3.1449, 1.3848, 1.5526], device='cuda:6'), covar=tensor([0.0927, 0.1750, 0.1171, 0.0910, 0.1508, 0.0253, 0.1378, 0.1711], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0077, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:03:56,425 INFO [finetune.py:976] (6/7) Epoch 13, batch 3550, loss[loss=0.1617, simple_loss=0.2351, pruned_loss=0.0441, over 4772.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2542, pruned_loss=0.06051, over 953434.14 frames. ], batch size: 28, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:04:37,462 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=72320.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:04:47,547 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72326.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:04:51,694 INFO [finetune.py:976] (6/7) Epoch 13, batch 3600, loss[loss=0.1565, simple_loss=0.2335, pruned_loss=0.03978, over 4762.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.253, pruned_loss=0.06025, over 954816.43 frames. ], batch size: 28, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:04:55,637 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-26 16:04:58,281 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.845e+01 1.525e+02 1.754e+02 2.048e+02 3.586e+02, threshold=3.507e+02, percent-clipped=0.0
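The [scaling.py:679] records compare a whitening metric against a per-module limit (2.0 for the grouped 96/192-channel cases, 5.0 for the full 384-channel one). One plausible formulation, assumed here rather than taken from icefall's code, is the eigenvalue-spread ratio E[lambda^2] / E[lambda]^2 of the per-group channel covariance, which is exactly 1.0 for perfectly "white" features and grows when variance concentrates in a few directions:

    import torch

    # Hypothetical whitening metric: eigenvalue spread of the channel
    # covariance, computed per group via traces (no eigendecomposition).
    def whitening_metric(x, num_groups):
        # x: (num_frames, num_channels)
        num_frames, num_channels = x.shape
        cpg = num_channels // num_groups  # channels per group
        x = x.reshape(num_frames, num_groups, cpg).transpose(0, 1)
        x = x - x.mean(dim=1, keepdim=True)
        cov = x.transpose(1, 2) @ x / num_frames            # (groups, cpg, cpg)
        mean_eig = cov.diagonal(dim1=1, dim2=2).mean(dim=1)  # E[lambda]
        mean_eig_sq = (cov * cov).sum(dim=(1, 2)) / cpg      # E[lambda^2]
        return (mean_eig_sq / mean_eig ** 2).mean().item()   # 1.0 if white

    print(whitening_metric(torch.randn(1000, 384), num_groups=1))  # ~1.0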
2023-03-26 16:05:42,445 INFO [finetune.py:976] (6/7) Epoch 13, batch 3650, loss[loss=0.2162, simple_loss=0.2857, pruned_loss=0.07331, over 4829.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2555, pruned_loss=0.06131, over 953061.41 frames. ], batch size: 40, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:05:50,446 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72387.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:05:51,106 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2283, 2.1142, 1.7732, 1.9495, 2.1673, 1.8996, 2.3000, 2.1347], device='cuda:6'), covar=tensor([0.1437, 0.2035, 0.3210, 0.2577, 0.2711, 0.1849, 0.2454, 0.1951], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0187, 0.0234, 0.0254, 0.0245, 0.0199, 0.0213, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:06:13,446 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2149, 2.1131, 1.6737, 2.1911, 2.1441, 1.8718, 2.4707, 2.1797], device='cuda:6'), covar=tensor([0.1341, 0.2178, 0.3220, 0.2590, 0.2688, 0.1725, 0.2963, 0.1763], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0187, 0.0234, 0.0254, 0.0245, 0.0200, 0.0213, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:06:54,367 INFO [finetune.py:976] (6/7) Epoch 13, batch 3700, loss[loss=0.1917, simple_loss=0.2669, pruned_loss=0.05826, over 4819.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2563, pruned_loss=0.0607, over 952921.47 frames. ], batch size: 39, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:07:04,379 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.616e+02 1.915e+02 2.308e+02 4.437e+02, threshold=3.829e+02, percent-clipped=1.0
2023-03-26 16:07:52,817 INFO [finetune.py:976] (6/7) Epoch 13, batch 3750, loss[loss=0.2225, simple_loss=0.2996, pruned_loss=0.0727, over 4850.00 frames. ], tot_loss[loss=0.1917, simple_loss=0.2592, pruned_loss=0.06215, over 952596.47 frames. ], batch size: 49, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:07:54,135 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72485.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:08:29,282 INFO [finetune.py:976] (6/7) Epoch 13, batch 3800, loss[loss=0.1599, simple_loss=0.2422, pruned_loss=0.03879, over 4921.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2598, pruned_loss=0.06188, over 952890.69 frames. ], batch size: 42, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:08:34,664 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.445e+01 1.578e+02 1.803e+02 2.155e+02 3.901e+02, threshold=3.607e+02, percent-clipped=1.0
2023-03-26 16:08:50,107 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0
2023-03-26 16:08:56,820 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72575.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:09:02,584 INFO [finetune.py:976] (6/7) Epoch 13, batch 3850, loss[loss=0.1416, simple_loss=0.2233, pruned_loss=0.02993, over 4776.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2584, pruned_loss=0.06107, over 953010.79 frames. ], batch size: 28, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:09:04,566 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3871, 2.8799, 2.6209, 1.3431, 2.8058, 2.4163, 2.2564, 2.5537], device='cuda:6'), covar=tensor([0.0971, 0.0890, 0.2012, 0.2239, 0.1537, 0.2302, 0.2221, 0.1180], device='cuda:6'), in_proj_covar=tensor([0.0165, 0.0196, 0.0200, 0.0185, 0.0212, 0.0207, 0.0222, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:09:13,619 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8686, 1.9548, 1.6962, 1.6808, 2.3994, 2.1679, 1.9695, 1.8610], device='cuda:6'), covar=tensor([0.0331, 0.0345, 0.0505, 0.0374, 0.0241, 0.0459, 0.0317, 0.0364], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0109, 0.0140, 0.0112, 0.0101, 0.0105, 0.0095, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2802e-05, 8.4593e-05, 1.1072e-04, 8.7312e-05, 7.8945e-05, 7.8021e-05, 7.1660e-05, 8.2941e-05], device='cuda:6')
2023-03-26 16:09:35,397 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=72623.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:09:46,306 INFO [finetune.py:976] (6/7) Epoch 13, batch 3900, loss[loss=0.1736, simple_loss=0.2447, pruned_loss=0.05123, over 4911.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2552, pruned_loss=0.05979, over 954470.05 frames. ], batch size: 43, lr: 3.59e-03, grad_scale: 16.0
2023-03-26 16:09:51,186 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.402e+01 1.493e+02 1.751e+02 2.217e+02 3.590e+02, threshold=3.501e+02, percent-clipped=0.0
2023-03-26 16:09:53,539 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6381, 3.5034, 3.3040, 1.4545, 3.5785, 2.6301, 0.9876, 2.3665], device='cuda:6'), covar=tensor([0.2172, 0.1962, 0.1648, 0.3677, 0.1112, 0.1043, 0.4134, 0.1660], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0172, 0.0158, 0.0127, 0.0155, 0.0120, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 16:10:18,074 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=72682.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:10:18,645 INFO [finetune.py:976] (6/7) Epoch 13, batch 3950, loss[loss=0.1591, simple_loss=0.227, pruned_loss=0.0456, over 4760.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2517, pruned_loss=0.05867, over 953030.40 frames. ], batch size: 28, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:10:48,382 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9761, 4.1392, 3.9515, 2.1450, 4.3215, 3.1415, 1.0552, 2.9025], device='cuda:6'), covar=tensor([0.2046, 0.1556, 0.1311, 0.2918, 0.0796, 0.0928, 0.4165, 0.1269], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0174, 0.0159, 0.0128, 0.0156, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 16:10:50,771 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9934, 1.8264, 1.7786, 1.9712, 1.5454, 4.6933, 1.7997, 2.4020], device='cuda:6'), covar=tensor([0.3386, 0.2581, 0.2075, 0.2257, 0.1632, 0.0135, 0.2309, 0.1167], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:11:10,484 INFO [finetune.py:976] (6/7) Epoch 13, batch 4000, loss[loss=0.1798, simple_loss=0.2519, pruned_loss=0.05383, over 4859.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2497, pruned_loss=0.05808, over 953123.90 frames. ], batch size: 49, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:11:16,800 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.220e+02 1.596e+02 1.921e+02 2.181e+02 4.609e+02, threshold=3.842e+02, percent-clipped=3.0
2023-03-26 16:11:44,632 INFO [finetune.py:976] (6/7) Epoch 13, batch 4050, loss[loss=0.2119, simple_loss=0.2853, pruned_loss=0.06923, over 4912.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2544, pruned_loss=0.05994, over 952799.13 frames. ], batch size: 43, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:11:46,495 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72785.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:11:53,679 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0115, 4.7428, 4.4791, 2.8315, 4.9338, 3.4993, 0.6640, 3.2808], device='cuda:6'), covar=tensor([0.2273, 0.1950, 0.1481, 0.2733, 0.0825, 0.0938, 0.4845, 0.1388], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0173, 0.0159, 0.0127, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 16:12:39,938 INFO [finetune.py:976] (6/7) Epoch 13, batch 4100, loss[loss=0.1869, simple_loss=0.2544, pruned_loss=0.0597, over 4818.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2569, pruned_loss=0.05985, over 955602.44 frames. ], batch size: 40, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:12:40,000 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=72833.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:12:45,295 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.592e+02 1.875e+02 2.230e+02 3.624e+02, threshold=3.749e+02, percent-clipped=0.0
2023-03-26 16:13:13,291 INFO [finetune.py:976] (6/7) Epoch 13, batch 4150, loss[loss=0.2115, simple_loss=0.2733, pruned_loss=0.07487, over 4725.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2582, pruned_loss=0.06071, over 955616.93 frames.
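The lr column ticks down from 3.59e-03 to 3.58e-03 in this stretch (and to 3.57e-03 around batch 73900). With the header settings base_lr=0.004, lr_batches=1e5 and lr_epochs=100, these values are reproduced by the Eden schedule used in these zipformer recipes; a sketch written from the published formula, shown for illustration:

    # Eden learning-rate schedule: decays smoothly in both the batch and
    # epoch counters, with "half-life" scales lr_batches and lr_epochs.
    def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

    print(f"{eden_lr(0.004, 72700, 13):.2e}")  # 3.58e-03, matching the log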
], batch size: 54, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:13:26,386 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72894.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:13:33,976 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6998, 1.4308, 2.0839, 3.3708, 2.3396, 2.4196, 0.9068, 2.7229], device='cuda:6'), covar=tensor([0.1743, 0.1584, 0.1329, 0.0598, 0.0817, 0.1328, 0.2045, 0.0535], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0132, 0.0163, 0.0100, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 16:13:35,859 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7413, 1.9225, 2.2730, 2.0248, 1.9364, 4.4630, 1.6622, 2.0143], device='cuda:6'), covar=tensor([0.0912, 0.1627, 0.1002, 0.0947, 0.1489, 0.0154, 0.1399, 0.1566], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:14:03,787 INFO [finetune.py:976] (6/7) Epoch 13, batch 4200, loss[loss=0.1858, simple_loss=0.2609, pruned_loss=0.05535, over 4765.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2585, pruned_loss=0.06036, over 956378.53 frames. ], batch size: 28, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:14:08,711 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.693e+01 1.496e+02 1.812e+02 2.169e+02 4.504e+02, threshold=3.624e+02, percent-clipped=2.0
2023-03-26 16:14:18,724 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=72955.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:14:35,613 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=72964.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 16:14:52,531 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=72982.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:14:53,057 INFO [finetune.py:976] (6/7) Epoch 13, batch 4250, loss[loss=0.1457, simple_loss=0.21, pruned_loss=0.04075, over 4161.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2561, pruned_loss=0.05999, over 955336.31 frames. ], batch size: 18, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:14:56,861 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5181, 1.3776, 1.5943, 0.9887, 1.5666, 1.5507, 1.5826, 1.2023], device='cuda:6'), covar=tensor([0.0648, 0.0948, 0.0675, 0.0971, 0.0886, 0.0759, 0.0698, 0.1794], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0131, 0.0140, 0.0122, 0.0122, 0.0140, 0.0139, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:14:58,109 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7107, 2.2967, 2.8964, 1.8031, 2.5098, 2.8745, 2.1302, 3.1187], device='cuda:6'), covar=tensor([0.1302, 0.1904, 0.1492, 0.2248, 0.0887, 0.1371, 0.2445, 0.0796], device='cuda:6'), in_proj_covar=tensor([0.0195, 0.0205, 0.0192, 0.0191, 0.0178, 0.0213, 0.0216, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:15:21,392 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73025.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 16:15:24,377 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73030.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:15:26,658 INFO [finetune.py:976] (6/7) Epoch 13, batch 4300, loss[loss=0.2062, simple_loss=0.2595, pruned_loss=0.07644, over 4910.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.254, pruned_loss=0.05937, over 955408.80 frames. ], batch size: 43, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:15:31,992 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.498e+02 1.782e+02 2.254e+02 4.055e+02, threshold=3.563e+02, percent-clipped=2.0
2023-03-26 16:15:59,433 INFO [finetune.py:976] (6/7) Epoch 13, batch 4350, loss[loss=0.1627, simple_loss=0.2322, pruned_loss=0.04656, over 4912.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2518, pruned_loss=0.05862, over 956884.49 frames. ], batch size: 36, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:16:34,969 INFO [finetune.py:976] (6/7) Epoch 13, batch 4400, loss[loss=0.2027, simple_loss=0.2719, pruned_loss=0.06678, over 4814.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2541, pruned_loss=0.0609, over 954486.82 frames. ], batch size: 33, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:16:40,307 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.328e+01 1.433e+02 1.829e+02 2.142e+02 3.915e+02, threshold=3.659e+02, percent-clipped=1.0
2023-03-26 16:16:44,599 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8784, 1.7626, 1.5340, 1.7121, 1.6957, 1.6679, 1.7160, 2.3746], device='cuda:6'), covar=tensor([0.4072, 0.4369, 0.3487, 0.4031, 0.4118, 0.2690, 0.3994, 0.1857], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0260, 0.0224, 0.0277, 0.0246, 0.0212, 0.0248, 0.0222], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:17:06,979 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5827, 1.4639, 1.4499, 1.4603, 0.9725, 2.9941, 1.2117, 1.7257], device='cuda:6'), covar=tensor([0.3595, 0.2731, 0.2271, 0.2615, 0.2063, 0.0270, 0.2666, 0.1305], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0115, 0.0098, 0.0098, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:17:08,723 INFO [finetune.py:976] (6/7) Epoch 13, batch 4450, loss[loss=0.177, simple_loss=0.2516, pruned_loss=0.05125, over 4768.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2568, pruned_loss=0.06148, over 952969.14 frames. ], batch size: 26, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:17:53,145 INFO [finetune.py:976] (6/7) Epoch 13, batch 4500, loss[loss=0.2296, simple_loss=0.2932, pruned_loss=0.08306, over 4793.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2583, pruned_loss=0.06198, over 950941.93 frames. ], batch size: 45, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:17:58,018 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.220e+02 1.732e+02 2.105e+02 2.505e+02 4.470e+02, threshold=4.210e+02, percent-clipped=3.0
2023-03-26 16:18:04,479 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73250.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:18:08,492 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.47 vs. limit=5.0
2023-03-26 16:18:26,891 INFO [finetune.py:976] (6/7) Epoch 13, batch 4550, loss[loss=0.2332, simple_loss=0.2966, pruned_loss=0.08486, over 4790.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2607, pruned_loss=0.0631, over 951048.27 frames.
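The grad_scale column is the dynamic loss scale of fp16 training (use_fp16: True in the header): it halves when a batch's gradients overflow (32.0 -> 16.0 near batch 3300 above) and is grown back after a run of finite steps (16.0 -> 32.0 near batch 5300 below). A generic sketch with torch.cuda.amp; init_scale and growth_interval here are assumptions, not the recipe's actual settings:

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_interval=2000)

    def fp16_step(model, optimizer, features, targets, criterion):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast(dtype=torch.float16):
            loss = criterion(model(features), targets)
        scaler.scale(loss).backward()  # backprop the scaled loss
        scaler.step(optimizer)         # step is skipped on inf/nan grads
        scaler.update()                # halve on overflow, else grow later
        return loss.detach(), scaler.get_scale()  # the logged grad_scale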
], batch size: 45, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:18:29,491 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3500, 1.4467, 1.7639, 1.6529, 1.4708, 3.3702, 1.2793, 1.4991], device='cuda:6'), covar=tensor([0.1025, 0.1788, 0.1133, 0.1017, 0.1767, 0.0212, 0.1554, 0.1782], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0081, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:18:34,961 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73293.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:19:07,532 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73320.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 16:19:15,999 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8627, 1.7533, 1.5040, 1.5103, 1.6281, 1.6095, 1.7087, 2.3286], device='cuda:6'), covar=tensor([0.3667, 0.3736, 0.3087, 0.3573, 0.3813, 0.2332, 0.3327, 0.1552], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0224, 0.0279, 0.0248, 0.0213, 0.0249, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:19:19,979 INFO [finetune.py:976] (6/7) Epoch 13, batch 4600, loss[loss=0.1959, simple_loss=0.2514, pruned_loss=0.07021, over 4863.00 frames. ], tot_loss[loss=0.1918, simple_loss=0.2592, pruned_loss=0.06217, over 952361.34 frames. ], batch size: 31, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:19:24,889 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.809e+01 1.600e+02 1.901e+02 2.194e+02 3.702e+02, threshold=3.803e+02, percent-clipped=0.0
2023-03-26 16:19:42,002 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73354.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:19:48,218 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2884, 2.3524, 1.8021, 2.5525, 2.2822, 2.0362, 2.9670, 2.4234], device='cuda:6'), covar=tensor([0.1361, 0.2484, 0.3225, 0.2725, 0.2623, 0.1640, 0.2910, 0.1891], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0188, 0.0235, 0.0255, 0.0244, 0.0199, 0.0213, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:20:11,746 INFO [finetune.py:976] (6/7) Epoch 13, batch 4650, loss[loss=0.1518, simple_loss=0.2224, pruned_loss=0.04057, over 4767.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2563, pruned_loss=0.061, over 953938.20 frames. ], batch size: 28, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:20:28,076 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6645, 1.5683, 1.5375, 1.5760, 0.8636, 2.9325, 1.0401, 1.6218], device='cuda:6'), covar=tensor([0.3133, 0.2324, 0.2046, 0.2251, 0.1922, 0.0254, 0.2582, 0.1211], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:20:31,762 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0701, 1.7664, 1.7617, 2.0222, 2.6586, 2.0322, 2.1088, 1.5689], device='cuda:6'), covar=tensor([0.2489, 0.2500, 0.2302, 0.2052, 0.1963, 0.1372, 0.2261, 0.2236], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0206, 0.0209, 0.0190, 0.0239, 0.0183, 0.0212, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:20:34,684 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-26 16:20:45,674 INFO [finetune.py:976] (6/7) Epoch 13, batch 4700, loss[loss=0.2054, simple_loss=0.2717, pruned_loss=0.06956, over 4005.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2532, pruned_loss=0.05976, over 953623.00 frames. ], batch size: 17, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:20:46,028 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.71 vs. limit=2.0
2023-03-26 16:20:50,434 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.614e+02 1.909e+02 2.257e+02 3.771e+02, threshold=3.817e+02, percent-clipped=0.0
2023-03-26 16:21:18,783 INFO [finetune.py:976] (6/7) Epoch 13, batch 4750, loss[loss=0.2039, simple_loss=0.2675, pruned_loss=0.0701, over 4874.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2521, pruned_loss=0.05985, over 955612.39 frames. ], batch size: 34, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:21:36,195 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0446, 0.9749, 0.9171, 1.1434, 1.2110, 1.1455, 1.0047, 0.9326], device='cuda:6'), covar=tensor([0.0374, 0.0319, 0.0651, 0.0290, 0.0302, 0.0445, 0.0326, 0.0415], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0108, 0.0138, 0.0111, 0.0100, 0.0104, 0.0095, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2444e-05, 8.3537e-05, 1.0937e-04, 8.6621e-05, 7.8057e-05, 7.6996e-05, 7.1396e-05, 8.2820e-05], device='cuda:6')
2023-03-26 16:21:51,902 INFO [finetune.py:976] (6/7) Epoch 13, batch 4800, loss[loss=0.1846, simple_loss=0.2596, pruned_loss=0.05481, over 4752.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2552, pruned_loss=0.06119, over 956601.77 frames. ], batch size: 54, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:21:57,193 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.637e+02 2.007e+02 2.318e+02 3.852e+02, threshold=4.014e+02, percent-clipped=1.0
2023-03-26 16:22:03,327 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73550.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:22:20,830 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6704, 1.7501, 1.9577, 1.7970, 1.8325, 3.2750, 1.5810, 1.8041], device='cuda:6'), covar=tensor([0.0918, 0.1592, 0.0948, 0.0937, 0.1484, 0.0295, 0.1429, 0.1549], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:22:24,807 INFO [finetune.py:976] (6/7) Epoch 13, batch 4850, loss[loss=0.2144, simple_loss=0.293, pruned_loss=0.06795, over 4843.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2584, pruned_loss=0.06195, over 953748.69 frames. ], batch size: 49, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:22:30,085 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73590.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:22:37,221 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73598.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:23:00,188 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73620.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 16:23:08,518 INFO [finetune.py:976] (6/7) Epoch 13, batch 4900, loss[loss=0.1595, simple_loss=0.2326, pruned_loss=0.04327, over 4911.00 frames. ], tot_loss[loss=0.1913, simple_loss=0.2588, pruned_loss=0.06193, over 952834.91 frames. ], batch size: 29, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:23:14,278 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.751e+02 2.108e+02 2.596e+02 5.059e+02, threshold=4.217e+02, percent-clipped=3.0
2023-03-26 16:23:19,735 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73649.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:23:21,039 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73651.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:23:31,884 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73668.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 16:23:36,282 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 16:23:41,384 INFO [finetune.py:976] (6/7) Epoch 13, batch 4950, loss[loss=0.1766, simple_loss=0.2484, pruned_loss=0.05241, over 4882.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.2597, pruned_loss=0.06202, over 953768.20 frames. ], batch size: 32, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:24:24,441 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=73732.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:24:24,741 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-26 16:24:24,939 INFO [finetune.py:976] (6/7) Epoch 13, batch 5000, loss[loss=0.1954, simple_loss=0.258, pruned_loss=0.06639, over 4812.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.258, pruned_loss=0.06127, over 954250.05 frames. ], batch size: 41, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:24:33,743 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.586e+02 1.888e+02 2.371e+02 3.310e+02, threshold=3.776e+02, percent-clipped=1.0
2023-03-26 16:24:41,331 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9994, 2.0611, 2.2075, 1.5418, 2.2037, 2.2520, 2.2831, 1.7990], device='cuda:6'), covar=tensor([0.0580, 0.0566, 0.0604, 0.0848, 0.0602, 0.0648, 0.0525, 0.0991], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0132, 0.0140, 0.0122, 0.0122, 0.0140, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:24:47,768 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9443, 1.2697, 1.7264, 1.7926, 1.5893, 1.5684, 1.7235, 1.7155], device='cuda:6'), covar=tensor([0.4107, 0.4482, 0.4161, 0.4258, 0.5747, 0.4577, 0.5067, 0.4059], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0236, 0.0254, 0.0263, 0.0261, 0.0235, 0.0275, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:25:17,335 INFO [finetune.py:976] (6/7) Epoch 13, batch 5050, loss[loss=0.1905, simple_loss=0.2536, pruned_loss=0.06372, over 4921.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2547, pruned_loss=0.06059, over 955072.75 frames. ], batch size: 43, lr: 3.58e-03, grad_scale: 16.0
2023-03-26 16:25:26,642 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=73793.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:25:53,900 INFO [finetune.py:976] (6/7) Epoch 13, batch 5100, loss[loss=0.1704, simple_loss=0.2474, pruned_loss=0.04667, over 4833.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2515, pruned_loss=0.05904, over 957198.98 frames. ], batch size: 47, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:25:59,155 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.804e+01 1.435e+02 1.752e+02 2.086e+02 3.868e+02, threshold=3.504e+02, percent-clipped=1.0
2023-03-26 16:26:27,642 INFO [finetune.py:976] (6/7) Epoch 13, batch 5150, loss[loss=0.1995, simple_loss=0.2693, pruned_loss=0.06489, over 4842.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2519, pruned_loss=0.05917, over 957494.09 frames. ], batch size: 49, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:26:38,546 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0
2023-03-26 16:26:47,159 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0016, 2.0125, 2.1265, 1.6116, 1.9731, 2.1261, 2.1271, 1.6872], device='cuda:6'), covar=tensor([0.0573, 0.0606, 0.0611, 0.0844, 0.0703, 0.0704, 0.0585, 0.1065], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0134, 0.0142, 0.0124, 0.0123, 0.0142, 0.0142, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:26:50,207 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0001, 1.8390, 1.7429, 1.9763, 1.7493, 4.6915, 1.7390, 2.3582], device='cuda:6'), covar=tensor([0.3341, 0.2466, 0.2100, 0.2203, 0.1467, 0.0094, 0.2507, 0.1231], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0123, 0.0115, 0.0098, 0.0098, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:27:01,332 INFO [finetune.py:976] (6/7) Epoch 13, batch 5200, loss[loss=0.2069, simple_loss=0.2819, pruned_loss=0.06598, over 4903.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2561, pruned_loss=0.06067, over 955543.86 frames. ], batch size: 36, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:27:06,219 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.675e+02 1.952e+02 2.217e+02 3.649e+02, threshold=3.904e+02, percent-clipped=2.0
2023-03-26 16:27:09,800 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=73946.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:27:11,691 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=73949.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:27:34,704 INFO [finetune.py:976] (6/7) Epoch 13, batch 5250, loss[loss=0.177, simple_loss=0.2451, pruned_loss=0.05444, over 4928.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2584, pruned_loss=0.06159, over 954658.81 frames. ], batch size: 33, lr: 3.57e-03, grad_scale: 16.0
2023-03-26 16:27:44,368 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=73997.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:27:45,073 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8888, 1.4160, 2.0051, 1.8861, 1.6698, 1.5913, 1.8113, 1.7782], device='cuda:6'), covar=tensor([0.4168, 0.4352, 0.3605, 0.4043, 0.5035, 0.4089, 0.4575, 0.3425], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0238, 0.0257, 0.0265, 0.0263, 0.0237, 0.0277, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:28:07,200 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6594, 1.5068, 2.1708, 3.4325, 2.3454, 2.3605, 0.9989, 2.7927], device='cuda:6'), covar=tensor([0.1805, 0.1485, 0.1304, 0.0551, 0.0787, 0.1395, 0.1947, 0.0533], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0116, 0.0134, 0.0166, 0.0101, 0.0139, 0.0127, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 16:28:09,083 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3730, 1.6077, 1.6510, 1.1035, 1.5410, 1.7818, 1.8460, 1.4341], device='cuda:6'), covar=tensor([0.0932, 0.0532, 0.0505, 0.0465, 0.0444, 0.0557, 0.0344, 0.0715], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0151, 0.0122, 0.0128, 0.0130, 0.0125, 0.0141, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.2434e-05, 1.1054e-04, 8.7727e-05, 9.2245e-05, 9.2159e-05, 9.0280e-05, 1.0252e-04, 1.0499e-04], device='cuda:6')
2023-03-26 16:28:11,314 INFO [finetune.py:976] (6/7) Epoch 13, batch 5300, loss[loss=0.1444, simple_loss=0.2141, pruned_loss=0.03735, over 4758.00 frames. ], tot_loss[loss=0.1919, simple_loss=0.2596, pruned_loss=0.0621, over 954232.36 frames. ], batch size: 28, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:28:17,125 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.638e+02 2.059e+02 2.515e+02 4.122e+02, threshold=4.117e+02, percent-clipped=3.0
2023-03-26 16:28:17,901 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9504, 1.8517, 1.6288, 2.0630, 2.4026, 2.1120, 1.5600, 1.5918], device='cuda:6'), covar=tensor([0.2167, 0.2040, 0.2078, 0.1620, 0.1714, 0.1110, 0.2495, 0.1928], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0206, 0.0208, 0.0189, 0.0238, 0.0182, 0.0212, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:28:23,797 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4633, 1.2696, 1.9186, 3.0389, 2.0039, 2.3267, 0.9676, 2.5477], device='cuda:6'), covar=tensor([0.2050, 0.1928, 0.1535, 0.0922, 0.1026, 0.1347, 0.2219, 0.0774], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0115, 0.0133, 0.0165, 0.0101, 0.0138, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 16:28:33,130 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74064.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:28:45,017 INFO [finetune.py:976] (6/7) Epoch 13, batch 5350, loss[loss=0.2056, simple_loss=0.2681, pruned_loss=0.07152, over 4888.00 frames. ], tot_loss[loss=0.192, simple_loss=0.2599, pruned_loss=0.06205, over 955661.33 frames. ], batch size: 32, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:28:48,549 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74088.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:28:56,172 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3389, 1.3967, 1.5649, 1.5812, 1.5019, 3.1263, 1.2747, 1.5518], device='cuda:6'), covar=tensor([0.0984, 0.1799, 0.1223, 0.0967, 0.1585, 0.0244, 0.1471, 0.1660], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 16:28:57,634 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.66 vs. limit=5.0
2023-03-26 16:29:12,980 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74125.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:29:18,158 INFO [finetune.py:976] (6/7) Epoch 13, batch 5400, loss[loss=0.1673, simple_loss=0.2314, pruned_loss=0.05158, over 4928.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2579, pruned_loss=0.06157, over 954823.93 frames. ], batch size: 38, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:29:27,917 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.534e+02 1.853e+02 2.261e+02 4.254e+02, threshold=3.706e+02, percent-clipped=1.0
2023-03-26 16:30:11,887 INFO [finetune.py:976] (6/7) Epoch 13, batch 5450, loss[loss=0.1463, simple_loss=0.2122, pruned_loss=0.04016, over 4729.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2553, pruned_loss=0.06093, over 954510.94 frames. ], batch size: 59, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:30:13,795 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0503, 1.8552, 1.7570, 2.0077, 2.7334, 1.9678, 2.0641, 1.5498], device='cuda:6'), covar=tensor([0.2347, 0.2323, 0.2169, 0.1835, 0.1763, 0.1370, 0.2266, 0.2111], device='cuda:6'), in_proj_covar=tensor([0.0236, 0.0206, 0.0208, 0.0189, 0.0238, 0.0182, 0.0211, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 16:30:41,957 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.68 vs. limit=5.0
2023-03-26 16:30:56,460 INFO [finetune.py:976] (6/7) Epoch 13, batch 5500, loss[loss=0.1283, simple_loss=0.1958, pruned_loss=0.03033, over 4743.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2507, pruned_loss=0.05847, over 956074.30 frames. ], batch size: 23, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:31:01,348 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74240.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:31:01,849 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.610e+01 1.534e+02 1.911e+02 2.187e+02 5.924e+02, threshold=3.822e+02, percent-clipped=2.0
2023-03-26 16:31:04,990 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74246.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:31:30,480 INFO [finetune.py:976] (6/7) Epoch 13, batch 5550, loss[loss=0.1762, simple_loss=0.2534, pruned_loss=0.04956, over 4762.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2519, pruned_loss=0.05913, over 957513.00 frames. ], batch size: 26, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:31:37,729 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74294.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:31:42,487 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74301.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:32:02,278 INFO [finetune.py:976] (6/7) Epoch 13, batch 5600, loss[loss=0.2084, simple_loss=0.2633, pruned_loss=0.07675, over 4801.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2554, pruned_loss=0.06023, over 956491.75 frames. ], batch size: 51, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:32:06,849 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.439e+01 1.528e+02 1.833e+02 2.251e+02 4.644e+02, threshold=3.666e+02, percent-clipped=1.0
2023-03-26 16:32:31,594 INFO [finetune.py:976] (6/7) Epoch 13, batch 5650, loss[loss=0.2167, simple_loss=0.2887, pruned_loss=0.07237, over 4709.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2571, pruned_loss=0.06004, over 957170.86 frames. ], batch size: 59, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:32:35,019 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74388.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:32:54,214 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74420.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:32:57,281 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-26 16:33:00,744 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2706, 4.2370, 4.0049, 2.8642, 4.3287, 3.4888, 1.7038, 3.1704], device='cuda:6'), covar=tensor([0.2296, 0.2228, 0.1712, 0.2812, 0.1012, 0.0985, 0.4130, 0.1569], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0173, 0.0161, 0.0128, 0.0156, 0.0121, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 16:33:01,856 INFO [finetune.py:976] (6/7) Epoch 13, batch 5700, loss[loss=0.1402, simple_loss=0.2084, pruned_loss=0.03598, over 4316.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2548, pruned_loss=0.06079, over 940587.20 frames. ], batch size: 18, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:33:03,649 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74436.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 16:33:06,486 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.627e+01 1.487e+02 1.916e+02 2.529e+02 4.839e+02, threshold=3.833e+02, percent-clipped=5.0
2023-03-26 16:33:31,118 INFO [finetune.py:976] (6/7) Epoch 14, batch 0, loss[loss=0.1656, simple_loss=0.2285, pruned_loss=0.05133, over 4810.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2285, pruned_loss=0.05133, over 4810.00 frames. ], batch size: 25, lr: 3.57e-03, grad_scale: 32.0
2023-03-26 16:33:31,118 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 16:33:41,691 INFO [finetune.py:1010] (6/7) Epoch 14, validation: loss=0.1582, simple_loss=0.2295, pruned_loss=0.04344, over 2265189.00 frames.
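At the epoch rollover above, [finetune.py:1001/1010/1011] run a validation pass: the dev loader is iterated without gradients, a frame-weighted loss is aggregated (the constant "over 2265189.00 frames." shows the same dev set each time), and peak GPU memory is reported. A hypothetical sketch of that pattern; the function and helper names are illustrative, not icefall's API:

    import torch

    def compute_validation_loss(model, dev_loader, compute_loss, device):
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = compute_loss(model, batch)  # assumed helper
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        print(f"validation: loss={tot_loss / tot_frames:.4f}, "
              f"over {tot_frames:.2f} frames.")
        print(f"Maximum memory allocated so far is "
              f"{torch.cuda.max_memory_allocated(device) // (1024 * 1024)}MB")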
2023-03-26 16:33:41,691 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 16:33:53,663 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6071, 1.5314, 1.4724, 1.6425, 1.0135, 3.3478, 1.2332, 1.7484], device='cuda:6'), covar=tensor([0.3511, 0.2604, 0.2322, 0.2523, 0.2127, 0.0263, 0.2684, 0.1339], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0123, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 16:34:14,920 INFO [finetune.py:976] (6/7) Epoch 14, batch 50, loss[loss=0.2039, simple_loss=0.2712, pruned_loss=0.06833, over 4903.00 frames. ], tot_loss[loss=0.1988, simple_loss=0.2646, pruned_loss=0.06649, over 216491.55 frames. ], batch size: 46, lr: 3.57e-03, grad_scale: 32.0 2023-03-26 16:34:42,638 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.553e+01 1.578e+02 1.920e+02 2.248e+02 3.729e+02, threshold=3.841e+02, percent-clipped=1.0 2023-03-26 16:35:03,794 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1578, 1.9197, 1.3923, 0.5805, 1.5866, 1.8456, 1.6558, 1.7560], device='cuda:6'), covar=tensor([0.0989, 0.0841, 0.1599, 0.2069, 0.1433, 0.2514, 0.2312, 0.0980], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0196, 0.0199, 0.0185, 0.0212, 0.0208, 0.0222, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:35:04,283 INFO [finetune.py:976] (6/7) Epoch 14, batch 100, loss[loss=0.1958, simple_loss=0.2495, pruned_loss=0.07103, over 4826.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2551, pruned_loss=0.06135, over 378875.66 frames. ], batch size: 33, lr: 3.57e-03, grad_scale: 32.0 2023-03-26 16:35:05,184 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 16:35:16,324 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1530, 1.9942, 1.6738, 1.9601, 1.8610, 1.8550, 1.9020, 2.6643], device='cuda:6'), covar=tensor([0.4351, 0.5053, 0.3905, 0.4623, 0.4665, 0.2727, 0.4563, 0.1825], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0261, 0.0225, 0.0277, 0.0247, 0.0213, 0.0248, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:35:31,056 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74595.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:35:31,616 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74596.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:35:49,901 INFO [finetune.py:976] (6/7) Epoch 14, batch 150, loss[loss=0.1914, simple_loss=0.2563, pruned_loss=0.06321, over 4936.00 frames. ], tot_loss[loss=0.184, simple_loss=0.2494, pruned_loss=0.05926, over 507406.19 frames. ], batch size: 33, lr: 3.57e-03, grad_scale: 32.0 2023-03-26 16:36:22,704 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.552e+02 1.795e+02 2.139e+02 3.747e+02, threshold=3.589e+02, percent-clipped=0.0 2023-03-26 16:36:33,041 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74656.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:36:35,947 INFO [finetune.py:976] (6/7) Epoch 14, batch 200, loss[loss=0.1739, simple_loss=0.2469, pruned_loss=0.05045, over 4170.00 frames. 
], tot_loss[loss=0.1847, simple_loss=0.2499, pruned_loss=0.05976, over 605888.12 frames. ], batch size: 65, lr: 3.57e-03, grad_scale: 32.0 2023-03-26 16:36:49,135 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8761, 1.3861, 0.8543, 1.6184, 2.0996, 1.5250, 1.5775, 1.7847], device='cuda:6'), covar=tensor([0.1364, 0.1996, 0.1974, 0.1172, 0.1920, 0.1989, 0.1361, 0.1769], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 16:36:55,115 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6010, 1.4494, 1.6581, 1.7633, 1.4853, 3.3652, 1.3353, 1.5114], device='cuda:6'), covar=tensor([0.0913, 0.1820, 0.1210, 0.0986, 0.1709, 0.0238, 0.1523, 0.1813], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0080, 0.0073, 0.0077, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 16:37:02,943 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5314, 1.4500, 1.4229, 1.5762, 1.0365, 3.3585, 1.1242, 1.6150], device='cuda:6'), covar=tensor([0.3481, 0.2609, 0.2238, 0.2440, 0.1966, 0.0197, 0.2882, 0.1409], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0115, 0.0098, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 16:37:09,842 INFO [finetune.py:976] (6/7) Epoch 14, batch 250, loss[loss=0.22, simple_loss=0.2894, pruned_loss=0.07534, over 4816.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2521, pruned_loss=0.05994, over 684213.36 frames. ], batch size: 39, lr: 3.57e-03, grad_scale: 32.0 2023-03-26 16:37:15,967 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:37:30,097 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.719e+02 2.081e+02 2.468e+02 4.342e+02, threshold=4.162e+02, percent-clipped=2.0 2023-03-26 16:37:42,696 INFO [finetune.py:976] (6/7) Epoch 14, batch 300, loss[loss=0.1847, simple_loss=0.2296, pruned_loss=0.06996, over 4049.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2547, pruned_loss=0.06046, over 743670.92 frames. ], batch size: 17, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:37:43,337 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2605, 2.0616, 2.2334, 1.0815, 2.5140, 2.6217, 2.2965, 2.0472], device='cuda:6'), covar=tensor([0.0924, 0.0699, 0.0411, 0.0655, 0.0384, 0.0553, 0.0428, 0.0604], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0153, 0.0123, 0.0130, 0.0131, 0.0127, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3224e-05, 1.1182e-04, 8.8622e-05, 9.3363e-05, 9.2616e-05, 9.1726e-05, 1.0341e-04, 1.0595e-04], device='cuda:6') 2023-03-26 16:37:48,016 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74768.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:38:16,372 INFO [finetune.py:976] (6/7) Epoch 14, batch 350, loss[loss=0.2077, simple_loss=0.2692, pruned_loss=0.07313, over 4856.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.256, pruned_loss=0.06096, over 790922.94 frames. 
], batch size: 44, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:38:36,814 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.618e+01 1.648e+02 1.967e+02 2.475e+02 5.107e+02, threshold=3.933e+02, percent-clipped=3.0 2023-03-26 16:38:49,810 INFO [finetune.py:976] (6/7) Epoch 14, batch 400, loss[loss=0.2402, simple_loss=0.287, pruned_loss=0.09672, over 4850.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2577, pruned_loss=0.06196, over 827682.48 frames. ], batch size: 31, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:39:11,456 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.61 vs. limit=5.0 2023-03-26 16:39:13,575 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=74896.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:22,422 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74909.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:23,516 INFO [finetune.py:976] (6/7) Epoch 14, batch 450, loss[loss=0.225, simple_loss=0.2976, pruned_loss=0.07623, over 4722.00 frames. ], tot_loss[loss=0.1903, simple_loss=0.2571, pruned_loss=0.06177, over 856498.82 frames. ], batch size: 54, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:39:43,596 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.587e+02 1.868e+02 2.260e+02 4.285e+02, threshold=3.737e+02, percent-clipped=2.0 2023-03-26 16:39:45,928 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=74944.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:50,643 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=74951.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:39:58,630 INFO [finetune.py:976] (6/7) Epoch 14, batch 500, loss[loss=0.1792, simple_loss=0.2426, pruned_loss=0.05791, over 4932.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2556, pruned_loss=0.06153, over 879015.08 frames. ], batch size: 38, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:40:00,600 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=74964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:40:00,665 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-26 16:40:08,964 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=74970.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:40:45,476 INFO [finetune.py:976] (6/7) Epoch 14, batch 550, loss[loss=0.1642, simple_loss=0.2436, pruned_loss=0.04243, over 4824.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2526, pruned_loss=0.06038, over 895745.96 frames. 
], batch size: 33, lr: 3.56e-03, grad_scale: 32.0 2023-03-26 16:40:57,859 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75025.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:41:00,187 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2298, 1.3272, 1.4094, 0.8602, 1.2772, 1.5391, 1.6230, 1.3084], device='cuda:6'), covar=tensor([0.0884, 0.0560, 0.0471, 0.0466, 0.0474, 0.0601, 0.0291, 0.0684], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0152, 0.0122, 0.0129, 0.0130, 0.0126, 0.0141, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.2576e-05, 1.1092e-04, 8.7761e-05, 9.2661e-05, 9.2127e-05, 9.1320e-05, 1.0267e-04, 1.0529e-04], device='cuda:6') 2023-03-26 16:41:09,612 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.564e+02 1.846e+02 2.204e+02 7.411e+02, threshold=3.691e+02, percent-clipped=3.0 2023-03-26 16:41:32,955 INFO [finetune.py:976] (6/7) Epoch 14, batch 600, loss[loss=0.1742, simple_loss=0.2518, pruned_loss=0.04829, over 4931.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2529, pruned_loss=0.06032, over 907596.40 frames. ], batch size: 38, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:42:10,228 INFO [finetune.py:976] (6/7) Epoch 14, batch 650, loss[loss=0.2292, simple_loss=0.3007, pruned_loss=0.07889, over 4902.00 frames. ], tot_loss[loss=0.1894, simple_loss=0.2565, pruned_loss=0.06112, over 917724.78 frames. ], batch size: 37, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:42:27,570 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3274, 2.1802, 2.4766, 1.6883, 2.3018, 2.5837, 2.4046, 1.9483], device='cuda:6'), covar=tensor([0.0600, 0.0642, 0.0595, 0.0868, 0.0625, 0.0603, 0.0629, 0.1006], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0133, 0.0142, 0.0123, 0.0124, 0.0141, 0.0141, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:42:30,910 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.620e+02 1.922e+02 2.248e+02 3.855e+02, threshold=3.845e+02, percent-clipped=1.0 2023-03-26 16:42:43,796 INFO [finetune.py:976] (6/7) Epoch 14, batch 700, loss[loss=0.1563, simple_loss=0.2354, pruned_loss=0.03867, over 4811.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2562, pruned_loss=0.06076, over 926068.27 frames. ], batch size: 45, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:42:45,100 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3852, 1.7360, 2.5369, 4.1725, 3.0216, 2.7153, 0.8566, 3.3700], device='cuda:6'), covar=tensor([0.1597, 0.1618, 0.1399, 0.0476, 0.0665, 0.1621, 0.2093, 0.0420], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0116, 0.0133, 0.0164, 0.0100, 0.0137, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 16:43:14,167 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8623, 1.7803, 1.6482, 2.0719, 2.2139, 2.0326, 1.5622, 1.5264], device='cuda:6'), covar=tensor([0.1994, 0.1792, 0.1679, 0.1477, 0.1625, 0.1034, 0.2234, 0.1841], device='cuda:6'), in_proj_covar=tensor([0.0237, 0.0205, 0.0208, 0.0188, 0.0237, 0.0183, 0.0212, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:43:16,881 INFO [finetune.py:976] (6/7) Epoch 14, batch 750, loss[loss=0.199, simple_loss=0.2602, pruned_loss=0.06886, over 4756.00 frames. 
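
Within the span above the logged grad_scale halves from 32.0 (batch 550) to 16.0 (batch 600). With use_fp16=True in this run, that is standard loss-scaling behaviour: a step that produces inf/nan gradients is skipped and the scale is halved. A generic, self-contained PyTorch sketch of the mechanism (requires a CUDA device); the model, optimizer and loss here are stand-ins, and the real loop in finetune.py is more involved:

    import torch
    from torch.cuda.amp import GradScaler, autocast

    model = torch.nn.Linear(80, 500).cuda()      # stand-in for the transducer
    opt = torch.optim.AdamW(model.parameters())  # stand-in optimizer
    scaler = GradScaler(init_scale=32.0)         # illustrative initial scale

    x = torch.randn(8, 80, device="cuda")
    with autocast():
        loss = model(x).square().mean()          # stand-in loss
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()  # after an inf/nan step the scale is halved, e.g. 32.0 -> 16.0
    print(scaler.get_scale())
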
], tot_loss[loss=0.1893, simple_loss=0.2572, pruned_loss=0.06073, over 931668.12 frames. ], batch size: 27, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:43:28,269 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=75228.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:43:37,704 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.931e+01 1.583e+02 1.834e+02 2.163e+02 4.783e+02, threshold=3.668e+02, percent-clipped=1.0 2023-03-26 16:43:44,258 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75251.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:43:46,521 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 16:43:50,668 INFO [finetune.py:976] (6/7) Epoch 14, batch 800, loss[loss=0.2148, simple_loss=0.2791, pruned_loss=0.0753, over 4837.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.257, pruned_loss=0.06043, over 934250.72 frames. ], batch size: 49, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:43:53,657 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75265.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:43:55,532 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6060, 1.5523, 1.6355, 0.8945, 1.6630, 1.8851, 1.8450, 1.4100], device='cuda:6'), covar=tensor([0.0978, 0.0728, 0.0551, 0.0585, 0.0527, 0.0564, 0.0370, 0.0848], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0152, 0.0122, 0.0128, 0.0130, 0.0126, 0.0142, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.2693e-05, 1.1092e-04, 8.7454e-05, 9.2311e-05, 9.2180e-05, 9.1251e-05, 1.0286e-04, 1.0529e-04], device='cuda:6') 2023-03-26 16:43:56,662 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=75270.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:43:57,859 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4675, 1.4906, 1.7184, 1.7467, 1.6383, 3.3395, 1.3022, 1.6272], device='cuda:6'), covar=tensor([0.0987, 0.1794, 0.1103, 0.0969, 0.1522, 0.0221, 0.1490, 0.1714], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 16:44:09,559 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75289.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:44:16,560 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75299.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:44:24,289 INFO [finetune.py:976] (6/7) Epoch 14, batch 850, loss[loss=0.2258, simple_loss=0.2859, pruned_loss=0.08279, over 4777.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2559, pruned_loss=0.05998, over 938292.05 frames. 
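
The zipformer.py:2441 dumps are attn_weights_entropy diagnostics: one entropy value per attention head, where a value near 0 means the head has collapsed onto one or two key positions and larger values mean it attends broadly. One plausible way to compute such per-head values; the reduction is an assumption, and the exact code in zipformer.py may average differently:

    import torch

    def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        # attn: (num_heads, query_len, key_len), each row a softmax distribution
        ent = -(attn * (attn + eps).log()).sum(dim=-1)  # entropy per query position
        return ent.mean(dim=-1)                         # average over queries -> one value per head
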
], batch size: 26, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:44:30,259 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75320.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:44:37,464 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=75331.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 16:44:44,944 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.704e+01 1.586e+02 1.985e+02 2.275e+02 3.825e+02, threshold=3.970e+02, percent-clipped=2.0 2023-03-26 16:44:57,417 INFO [finetune.py:976] (6/7) Epoch 14, batch 900, loss[loss=0.1923, simple_loss=0.2478, pruned_loss=0.0684, over 4942.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2528, pruned_loss=0.05902, over 942273.94 frames. ], batch size: 33, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:45:44,925 INFO [finetune.py:976] (6/7) Epoch 14, batch 950, loss[loss=0.1875, simple_loss=0.2617, pruned_loss=0.05662, over 4818.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2526, pruned_loss=0.0598, over 945278.06 frames. ], batch size: 38, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:45:50,484 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1062, 0.9229, 1.0390, 0.4675, 0.8293, 1.1260, 1.1806, 0.9851], device='cuda:6'), covar=tensor([0.0962, 0.0650, 0.0501, 0.0551, 0.0530, 0.0694, 0.0417, 0.0734], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0153, 0.0122, 0.0129, 0.0131, 0.0127, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.2945e-05, 1.1141e-04, 8.7772e-05, 9.2910e-05, 9.2785e-05, 9.1585e-05, 1.0336e-04, 1.0588e-04], device='cuda:6') 2023-03-26 16:45:52,297 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0 2023-03-26 16:46:05,784 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.942e+01 1.566e+02 1.871e+02 2.243e+02 4.539e+02, threshold=3.743e+02, percent-clipped=1.0 2023-03-26 16:46:18,857 INFO [finetune.py:976] (6/7) Epoch 14, batch 1000, loss[loss=0.2392, simple_loss=0.3095, pruned_loss=0.08452, over 4721.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2547, pruned_loss=0.06051, over 946535.22 frames. ], batch size: 59, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:47:07,213 INFO [finetune.py:976] (6/7) Epoch 14, batch 1050, loss[loss=0.151, simple_loss=0.2322, pruned_loss=0.03489, over 4746.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2562, pruned_loss=0.06088, over 948356.67 frames. ], batch size: 27, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:47:31,085 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.109e+01 1.606e+02 2.003e+02 2.356e+02 8.983e+02, threshold=4.007e+02, percent-clipped=2.0 2023-03-26 16:47:44,015 INFO [finetune.py:976] (6/7) Epoch 14, batch 1100, loss[loss=0.2387, simple_loss=0.3046, pruned_loss=0.08639, over 4901.00 frames. ], tot_loss[loss=0.1909, simple_loss=0.2588, pruned_loss=0.06155, over 950787.17 frames. ], batch size: 43, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:47:47,090 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75565.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:00,099 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75584.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:18,069 INFO [finetune.py:976] (6/7) Epoch 14, batch 1150, loss[loss=0.1845, simple_loss=0.2551, pruned_loss=0.05697, over 4917.00 frames. ], tot_loss[loss=0.1924, simple_loss=0.2601, pruned_loss=0.06238, over 953038.62 frames. 
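
The zipformer.py:1188 lines report stochastic layer skipping: most batches drop nothing (num_to_drop=0, layers_to_drop=set()), but occasionally a whole encoder layer is skipped (num_to_drop=1, layers_to_drop={3} just above), with the warmup window in batches (warmup_begin/warmup_end) printed alongside. A schematic version only; the probabilities and their dependence on the warmup counter are assumptions, not the real schedule:

    import random

    def pick_layers_to_drop(num_layers: int, batch_count: float,
                            warmup_begin: float, warmup_end: float) -> set:
        # assume a higher drop probability inside the warmup window,
        # and a small residual one afterwards
        in_warmup = warmup_begin <= batch_count < warmup_end
        p = 0.075 if in_warmup else 0.025  # illustrative values
        to_drop = {i for i in range(num_layers) if random.random() < p}
        return to_drop  # logged as num_to_drop=len(to_drop), layers_to_drop={...}
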
], batch size: 33, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:48:19,808 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75613.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:24,055 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75620.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:28,193 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=75626.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 16:48:38,760 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.806e+01 1.586e+02 1.986e+02 2.337e+02 5.787e+02, threshold=3.972e+02, percent-clipped=2.0 2023-03-26 16:48:51,185 INFO [finetune.py:976] (6/7) Epoch 14, batch 1200, loss[loss=0.1659, simple_loss=0.2328, pruned_loss=0.04948, over 4798.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2577, pruned_loss=0.06161, over 952661.46 frames. ], batch size: 25, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:48:56,225 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-26 16:48:56,400 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75668.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:48:58,291 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2195, 1.9963, 2.2791, 1.6717, 2.1990, 2.5481, 2.3999, 1.5381], device='cuda:6'), covar=tensor([0.0665, 0.0736, 0.0696, 0.0885, 0.0654, 0.0616, 0.0583, 0.1676], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0132, 0.0142, 0.0123, 0.0124, 0.0141, 0.0141, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:49:24,701 INFO [finetune.py:976] (6/7) Epoch 14, batch 1250, loss[loss=0.169, simple_loss=0.2344, pruned_loss=0.05184, over 4935.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2554, pruned_loss=0.06086, over 953704.17 frames. ], batch size: 38, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:49:45,240 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.316e+01 1.497e+02 1.829e+02 2.293e+02 4.240e+02, threshold=3.659e+02, percent-clipped=2.0 2023-03-26 16:49:57,807 INFO [finetune.py:976] (6/7) Epoch 14, batch 1300, loss[loss=0.1461, simple_loss=0.2193, pruned_loss=0.03649, over 4893.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2522, pruned_loss=0.05954, over 955760.22 frames. 
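
In each finetune.py:976 record, loss[...] describes the current batch while tot_loss[...] is a frame-weighted running average over the epoch so far, which is why its frame count keeps growing (605888 frames in the first record of this stretch, 952661 by batch 1200) while the per-batch counts stay near 4-5k. A minimal sketch of that bookkeeping; icefall wraps it in a MetricsTracker-style helper:

    class RunningLoss:
        """Frame-weighted running average, as printed in tot_loss[...]."""
        def __init__(self):
            self.frames = 0.0
            self.weighted = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> float:
            self.frames += batch_frames
            self.weighted += batch_loss * batch_frames
            return self.weighted / self.frames
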
], batch size: 32, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:50:17,887 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3610, 2.3806, 2.2581, 1.7943, 2.1513, 2.7048, 2.5061, 2.0711], device='cuda:6'), covar=tensor([0.0560, 0.0536, 0.0737, 0.0873, 0.1279, 0.0579, 0.0558, 0.0968], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0132, 0.0141, 0.0123, 0.0123, 0.0141, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:50:23,258 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8225, 1.4639, 2.2244, 3.4366, 2.3276, 2.3713, 1.0065, 2.7775], device='cuda:6'), covar=tensor([0.1682, 0.1503, 0.1364, 0.0525, 0.0791, 0.1866, 0.1994, 0.0502], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0101, 0.0139, 0.0127, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 16:50:31,735 INFO [finetune.py:976] (6/7) Epoch 14, batch 1350, loss[loss=0.147, simple_loss=0.2161, pruned_loss=0.03891, over 4767.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.254, pruned_loss=0.06114, over 952687.71 frames. ], batch size: 26, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:51:07,731 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.659e+01 1.636e+02 1.953e+02 2.256e+02 6.748e+02, threshold=3.906e+02, percent-clipped=1.0 2023-03-26 16:51:19,718 INFO [finetune.py:976] (6/7) Epoch 14, batch 1400, loss[loss=0.1649, simple_loss=0.2317, pruned_loss=0.04911, over 4809.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2556, pruned_loss=0.06076, over 954703.19 frames. ], batch size: 25, lr: 3.56e-03, grad_scale: 16.0 2023-03-26 16:51:35,792 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75884.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:51:53,578 INFO [finetune.py:976] (6/7) Epoch 14, batch 1450, loss[loss=0.1726, simple_loss=0.251, pruned_loss=0.04705, over 4727.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2573, pruned_loss=0.0616, over 952740.40 frames. 
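
Each record's three values are tied together: simple_loss is the smoothed, linear-complexity transducer loss, pruned_loss the exact loss on the pruned lattice, and loss their weighted combination. With simple_loss_scale=0.5 from the run's parameter dump, the logged numbers fit the usual icefall weighting; warmup-dependent rescaling is omitted from this sketch:

    # loss = simple_loss_scale * simple_loss + pruned_loss
    simple_loss, pruned_loss = 0.2193, 0.03649  # the batch 1300 record above
    loss = 0.5 * simple_loss + pruned_loss
    print(round(loss, 4))                       # 0.1461, as logged
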
], batch size: 54, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:52:08,082 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=75926.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 16:52:17,404 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75932.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:52:27,769 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.664e+02 1.939e+02 2.609e+02 1.085e+03, threshold=3.877e+02, percent-clipped=5.0 2023-03-26 16:52:27,931 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2760, 2.0212, 1.9736, 2.1560, 3.1331, 2.1672, 2.4369, 1.7097], device='cuda:6'), covar=tensor([0.2315, 0.2200, 0.1994, 0.1875, 0.1469, 0.1220, 0.1871, 0.2063], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0207, 0.0210, 0.0190, 0.0241, 0.0184, 0.0214, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:52:31,945 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6282, 2.2717, 1.7674, 0.8302, 2.0758, 2.0104, 1.7525, 2.0702], device='cuda:6'), covar=tensor([0.0840, 0.0948, 0.1574, 0.2109, 0.1422, 0.2325, 0.2314, 0.0921], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0195, 0.0198, 0.0183, 0.0211, 0.0207, 0.0220, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:52:44,458 INFO [finetune.py:976] (6/7) Epoch 14, batch 1500, loss[loss=0.18, simple_loss=0.2583, pruned_loss=0.05088, over 3982.00 frames. ], tot_loss[loss=0.191, simple_loss=0.258, pruned_loss=0.06199, over 951503.30 frames. ], batch size: 17, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:52:53,472 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=75974.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:53:19,430 INFO [finetune.py:976] (6/7) Epoch 14, batch 1550, loss[loss=0.196, simple_loss=0.2458, pruned_loss=0.0731, over 4313.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2581, pruned_loss=0.06146, over 952748.81 frames. ], batch size: 19, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:53:24,743 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0 2023-03-26 16:53:40,207 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.569e+01 1.489e+02 1.761e+02 2.263e+02 3.823e+02, threshold=3.522e+02, percent-clipped=0.0 2023-03-26 16:53:53,247 INFO [finetune.py:976] (6/7) Epoch 14, batch 1600, loss[loss=0.1668, simple_loss=0.241, pruned_loss=0.04634, over 4818.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2563, pruned_loss=0.06102, over 953389.50 frames. ], batch size: 40, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:54:26,637 INFO [finetune.py:976] (6/7) Epoch 14, batch 1650, loss[loss=0.2013, simple_loss=0.2746, pruned_loss=0.06398, over 4901.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2527, pruned_loss=0.05932, over 955346.67 frames. ], batch size: 35, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:54:47,811 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.271e+01 1.580e+02 1.872e+02 2.187e+02 4.946e+02, threshold=3.744e+02, percent-clipped=3.0 2023-03-26 16:55:00,263 INFO [finetune.py:976] (6/7) Epoch 14, batch 1700, loss[loss=0.2053, simple_loss=0.2625, pruned_loss=0.07404, over 4799.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2506, pruned_loss=0.05875, over 955738.01 frames. 
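
The learning rate in these records steps from 3.56e-03 down to 3.55e-03 around global batch ~75900. With base_lr=0.004, lr_batches=100000 and lr_epochs=100 from the run's parameter dump, the logged values match an Eden-style schedule; the exact batch/epoch fractions the scheduler uses internally are approximated here:

    def eden_lr(base_lr: float, batch: float, epoch: float,
                lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    print(eden_lr(0.004, batch=75900, epoch=14))  # ~3.55e-03, as logged
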
], batch size: 41, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:55:34,231 INFO [finetune.py:976] (6/7) Epoch 14, batch 1750, loss[loss=0.1939, simple_loss=0.2479, pruned_loss=0.06995, over 4697.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.252, pruned_loss=0.05891, over 955167.99 frames. ], batch size: 23, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:55:55,254 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.620e+02 1.973e+02 2.349e+02 4.562e+02, threshold=3.945e+02, percent-clipped=4.0 2023-03-26 16:56:17,742 INFO [finetune.py:976] (6/7) Epoch 14, batch 1800, loss[loss=0.1961, simple_loss=0.2667, pruned_loss=0.06275, over 4862.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2564, pruned_loss=0.06009, over 956160.06 frames. ], batch size: 44, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:56:27,644 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76269.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 16:56:47,389 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76294.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:56:58,601 INFO [finetune.py:976] (6/7) Epoch 14, batch 1850, loss[loss=0.2013, simple_loss=0.2652, pruned_loss=0.06871, over 4896.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2573, pruned_loss=0.06047, over 957843.55 frames. ], batch size: 43, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:57:07,116 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76323.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 16:57:07,812 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.59 vs. limit=2.0 2023-03-26 16:57:11,396 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76330.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 16:57:19,100 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.514e+02 1.907e+02 2.299e+02 3.483e+02, threshold=3.815e+02, percent-clipped=0.0 2023-03-26 16:57:20,123 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-26 16:57:35,315 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 16:57:38,872 INFO [finetune.py:976] (6/7) Epoch 14, batch 1900, loss[loss=0.1691, simple_loss=0.2528, pruned_loss=0.04269, over 4774.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2588, pruned_loss=0.06036, over 955851.22 frames. ], batch size: 27, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:57:57,155 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76384.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 16:58:15,532 INFO [finetune.py:976] (6/7) Epoch 14, batch 1950, loss[loss=0.1632, simple_loss=0.2461, pruned_loss=0.04012, over 4792.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2569, pruned_loss=0.0598, over 954811.74 frames. ], batch size: 26, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:58:35,771 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.938e+01 1.459e+02 1.786e+02 2.082e+02 3.715e+02, threshold=3.572e+02, percent-clipped=0.0 2023-03-26 16:58:49,141 INFO [finetune.py:976] (6/7) Epoch 14, batch 2000, loss[loss=0.1787, simple_loss=0.2468, pruned_loss=0.05532, over 4908.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2549, pruned_loss=0.05969, over 956618.84 frames. 
], batch size: 43, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:58:49,234 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2839, 3.7224, 3.9368, 4.0879, 4.0535, 3.8075, 4.4006, 1.3687], device='cuda:6'), covar=tensor([0.0866, 0.0928, 0.0838, 0.1018, 0.1282, 0.1466, 0.0760, 0.5650], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0244, 0.0277, 0.0292, 0.0332, 0.0281, 0.0302, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:58:53,189 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.23 vs. limit=5.0 2023-03-26 16:58:54,824 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.43 vs. limit=5.0 2023-03-26 16:59:22,662 INFO [finetune.py:976] (6/7) Epoch 14, batch 2050, loss[loss=0.1758, simple_loss=0.2453, pruned_loss=0.05319, over 4914.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2509, pruned_loss=0.05836, over 956508.30 frames. ], batch size: 36, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 16:59:28,184 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5576, 3.9145, 4.1607, 4.3950, 4.3134, 4.1054, 4.6730, 1.4789], device='cuda:6'), covar=tensor([0.0804, 0.0874, 0.0914, 0.0963, 0.1205, 0.1434, 0.0642, 0.5439], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0243, 0.0276, 0.0290, 0.0330, 0.0281, 0.0300, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:59:42,964 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.833e+01 1.460e+02 1.798e+02 2.157e+02 5.136e+02, threshold=3.595e+02, percent-clipped=3.0 2023-03-26 16:59:44,877 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2853, 1.2634, 1.3370, 0.8252, 1.3538, 1.3898, 1.3473, 1.2048], device='cuda:6'), covar=tensor([0.0605, 0.0730, 0.0643, 0.0929, 0.1133, 0.0621, 0.0627, 0.1061], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0132, 0.0142, 0.0123, 0.0124, 0.0141, 0.0141, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 16:59:56,036 INFO [finetune.py:976] (6/7) Epoch 14, batch 2100, loss[loss=0.2302, simple_loss=0.2926, pruned_loss=0.08394, over 4854.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2508, pruned_loss=0.05869, over 956970.09 frames. ], batch size: 44, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:00:29,508 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.55 vs. limit=5.0 2023-03-26 17:00:29,580 INFO [finetune.py:976] (6/7) Epoch 14, batch 2150, loss[loss=0.1958, simple_loss=0.2683, pruned_loss=0.06163, over 4726.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.254, pruned_loss=0.05961, over 956660.21 frames. 
], batch size: 59, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:00:38,761 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76625.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 17:00:50,407 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.680e+02 1.855e+02 2.274e+02 3.771e+02, threshold=3.710e+02, percent-clipped=2.0 2023-03-26 17:00:55,368 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:00:57,274 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0090, 1.4157, 0.9253, 1.8163, 2.2470, 1.5485, 1.7266, 1.6825], device='cuda:6'), covar=tensor([0.1388, 0.1995, 0.2077, 0.1187, 0.1882, 0.1968, 0.1451, 0.1997], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0113, 0.0093, 0.0120, 0.0094, 0.0100, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 17:01:02,484 INFO [finetune.py:976] (6/7) Epoch 14, batch 2200, loss[loss=0.2398, simple_loss=0.3, pruned_loss=0.08976, over 4856.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2563, pruned_loss=0.0606, over 955783.68 frames. ], batch size: 44, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:01:21,651 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=76679.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:01:25,757 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2004, 1.9935, 2.7059, 1.6468, 2.4329, 2.6345, 1.8805, 2.8301], device='cuda:6'), covar=tensor([0.1695, 0.2330, 0.1791, 0.2627, 0.1001, 0.1654, 0.2859, 0.0957], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0206, 0.0193, 0.0192, 0.0178, 0.0216, 0.0219, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:01:55,183 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1269, 1.6769, 1.9400, 0.8098, 2.1155, 2.4113, 1.8686, 1.8066], device='cuda:6'), covar=tensor([0.1172, 0.1339, 0.0601, 0.0894, 0.0704, 0.0612, 0.0702, 0.0897], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0121, 0.0129, 0.0130, 0.0126, 0.0141, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.2774e-05, 1.1128e-04, 8.7103e-05, 9.2769e-05, 9.1974e-05, 9.1207e-05, 1.0244e-04, 1.0511e-04], device='cuda:6') 2023-03-26 17:01:57,531 INFO [finetune.py:976] (6/7) Epoch 14, batch 2250, loss[loss=0.1954, simple_loss=0.2608, pruned_loss=0.065, over 4844.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2576, pruned_loss=0.06031, over 957580.31 frames. ], batch size: 49, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:02:02,615 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0414, 1.9894, 1.5744, 2.0436, 2.0490, 1.7768, 2.3209, 2.0855], device='cuda:6'), covar=tensor([0.1393, 0.2084, 0.2984, 0.2438, 0.2487, 0.1607, 0.3104, 0.1692], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0188, 0.0234, 0.0254, 0.0245, 0.0199, 0.0213, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:02:18,738 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.580e+02 1.848e+02 2.151e+02 3.368e+02, threshold=3.695e+02, percent-clipped=0.0 2023-03-26 17:02:31,272 INFO [finetune.py:976] (6/7) Epoch 14, batch 2300, loss[loss=0.1796, simple_loss=0.2437, pruned_loss=0.05776, over 4918.00 frames. 
], tot_loss[loss=0.1889, simple_loss=0.2575, pruned_loss=0.06018, over 956746.26 frames. ], batch size: 33, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:02:42,060 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-26 17:02:43,765 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6270, 3.7445, 3.4614, 1.8805, 3.7885, 2.7923, 0.9226, 2.5400], device='cuda:6'), covar=tensor([0.2448, 0.2153, 0.1536, 0.3241, 0.1034, 0.1099, 0.4296, 0.1490], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0172, 0.0159, 0.0127, 0.0155, 0.0122, 0.0144, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 17:03:06,757 INFO [finetune.py:976] (6/7) Epoch 14, batch 2350, loss[loss=0.161, simple_loss=0.234, pruned_loss=0.04401, over 4928.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2564, pruned_loss=0.0601, over 957560.09 frames. ], batch size: 33, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:03:25,948 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=76839.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:03:28,174 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.531e+02 1.863e+02 2.217e+02 4.521e+02, threshold=3.725e+02, percent-clipped=2.0 2023-03-26 17:03:40,659 INFO [finetune.py:976] (6/7) Epoch 14, batch 2400, loss[loss=0.1729, simple_loss=0.2407, pruned_loss=0.05251, over 4908.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2533, pruned_loss=0.059, over 956746.62 frames. ], batch size: 43, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:03:41,393 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8832, 1.7872, 1.5459, 2.0222, 2.4415, 2.0197, 1.6812, 1.4672], device='cuda:6'), covar=tensor([0.2380, 0.2195, 0.2118, 0.1787, 0.1799, 0.1255, 0.2453, 0.2222], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0207, 0.0211, 0.0190, 0.0240, 0.0184, 0.0214, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:03:56,247 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3327, 3.6951, 3.9098, 4.1965, 4.1013, 3.7978, 4.4010, 1.5210], device='cuda:6'), covar=tensor([0.0734, 0.0859, 0.0803, 0.0886, 0.1161, 0.1486, 0.0658, 0.5037], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0243, 0.0276, 0.0292, 0.0330, 0.0281, 0.0302, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:04:06,920 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=76900.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:04:09,899 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8400, 1.7991, 1.8821, 1.1689, 1.9121, 1.8819, 1.8563, 1.5951], device='cuda:6'), covar=tensor([0.0584, 0.0607, 0.0620, 0.0872, 0.0659, 0.0728, 0.0615, 0.1063], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0132, 0.0141, 0.0122, 0.0123, 0.0141, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:04:14,025 INFO [finetune.py:976] (6/7) Epoch 14, batch 2450, loss[loss=0.1712, simple_loss=0.2367, pruned_loss=0.05284, over 4798.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2491, pruned_loss=0.05737, over 957489.99 frames. 
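
The batch size field swings widely in this section (65 in the first record of this stretch, 17 at batch 300) because batches are assembled by total audio duration rather than by a fixed count: with max_duration=200 and the DynamicBucketingSampler configured at startup, many short cuts or a few long ones fill the same budget while the frame totals stay comparable. A minimal lhotse sketch; the cuts path is assumed for illustration:

    from lhotse import CutSet
    from lhotse.dataset import DynamicBucketingSampler

    cuts = CutSet.from_file("data/fbank/cuts_train.jsonl.gz")  # assumed path
    sampler = DynamicBucketingSampler(
        cuts, max_duration=200.0, num_buckets=30,
        shuffle=True, drop_last=True,
    )
    for batch_cuts in sampler:
        print(len(batch_cuts))  # varies per batch, like the "batch size" field above
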
], batch size: 51, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:04:23,086 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76925.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:04:34,663 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.650e+01 1.627e+02 1.914e+02 2.448e+02 4.488e+02, threshold=3.829e+02, percent-clipped=3.0 2023-03-26 17:04:40,098 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76950.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:04:47,674 INFO [finetune.py:976] (6/7) Epoch 14, batch 2500, loss[loss=0.1708, simple_loss=0.2498, pruned_loss=0.04589, over 4826.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2526, pruned_loss=0.06016, over 956339.60 frames. ], batch size: 49, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:04:55,526 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=76973.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:04:59,667 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=76979.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:05:05,945 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-26 17:05:12,270 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=76998.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:05:21,626 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 17:05:21,744 INFO [finetune.py:976] (6/7) Epoch 14, batch 2550, loss[loss=0.1954, simple_loss=0.2595, pruned_loss=0.06567, over 4928.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2553, pruned_loss=0.06106, over 953515.08 frames. ], batch size: 33, lr: 3.55e-03, grad_scale: 16.0 2023-03-26 17:05:32,009 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77027.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:05:42,424 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.599e+02 1.863e+02 2.531e+02 5.028e+02, threshold=3.725e+02, percent-clipped=2.0 2023-03-26 17:05:43,159 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9375, 1.8699, 1.7287, 1.8685, 1.2565, 3.8803, 1.5676, 2.0552], device='cuda:6'), covar=tensor([0.3145, 0.2349, 0.1985, 0.2261, 0.1686, 0.0189, 0.2371, 0.1157], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 17:05:43,769 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2088, 1.5484, 1.6991, 0.6576, 1.9852, 2.0957, 1.7999, 1.7128], device='cuda:6'), covar=tensor([0.1007, 0.1153, 0.0593, 0.0771, 0.0500, 0.0713, 0.0569, 0.0763], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0154, 0.0122, 0.0130, 0.0131, 0.0128, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3893e-05, 1.1209e-04, 8.7921e-05, 9.3593e-05, 9.2919e-05, 9.2523e-05, 1.0312e-04, 1.0582e-04], device='cuda:6') 2023-03-26 17:05:55,387 INFO [finetune.py:976] (6/7) Epoch 14, batch 2600, loss[loss=0.1389, simple_loss=0.2116, pruned_loss=0.03314, over 4772.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2563, pruned_loss=0.06111, over 952093.27 frames. ], batch size: 26, lr: 3.55e-03, grad_scale: 32.0 2023-03-26 17:06:15,297 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. 
limit=2.0 2023-03-26 17:06:18,043 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77095.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:06:27,072 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0057, 1.6177, 2.3568, 1.5947, 2.1973, 2.2682, 1.6014, 2.4207], device='cuda:6'), covar=tensor([0.1251, 0.1945, 0.1391, 0.1980, 0.0827, 0.1318, 0.2645, 0.0803], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0206, 0.0193, 0.0190, 0.0178, 0.0214, 0.0217, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:06:30,489 INFO [finetune.py:976] (6/7) Epoch 14, batch 2650, loss[loss=0.1629, simple_loss=0.2277, pruned_loss=0.04899, over 4763.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2573, pruned_loss=0.06122, over 952185.77 frames. ], batch size: 27, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:07:08,840 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.161e+02 1.606e+02 1.948e+02 2.371e+02 3.624e+02, threshold=3.895e+02, percent-clipped=0.0 2023-03-26 17:07:22,920 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77156.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 17:07:26,329 INFO [finetune.py:976] (6/7) Epoch 14, batch 2700, loss[loss=0.1518, simple_loss=0.2265, pruned_loss=0.0385, over 4932.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2572, pruned_loss=0.06112, over 951563.06 frames. ], batch size: 33, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:07:41,725 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 17:07:57,709 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77195.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:08:04,074 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.41 vs. limit=5.0 2023-03-26 17:08:07,956 INFO [finetune.py:976] (6/7) Epoch 14, batch 2750, loss[loss=0.213, simple_loss=0.267, pruned_loss=0.07951, over 4858.00 frames. ], tot_loss[loss=0.187, simple_loss=0.254, pruned_loss=0.05997, over 950998.14 frames. 
], batch size: 49, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:08:08,073 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77211.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:08:08,705 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2784, 2.0823, 2.0499, 2.3784, 2.8292, 2.4431, 2.1177, 1.9723], device='cuda:6'), covar=tensor([0.2105, 0.2003, 0.1804, 0.1631, 0.1523, 0.1035, 0.2060, 0.1971], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0207, 0.0211, 0.0191, 0.0241, 0.0184, 0.0214, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:08:15,191 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3712, 2.3360, 1.9524, 0.9691, 2.1301, 1.9019, 1.7328, 2.0354], device='cuda:6'), covar=tensor([0.1036, 0.0648, 0.1425, 0.1899, 0.1398, 0.1991, 0.2059, 0.0968], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0197, 0.0201, 0.0186, 0.0216, 0.0210, 0.0225, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:08:18,248 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.4555, 3.0596, 2.7708, 1.3535, 2.9596, 2.4737, 2.2637, 2.6073], device='cuda:6'), covar=tensor([0.0874, 0.0845, 0.1647, 0.2195, 0.1608, 0.1892, 0.1985, 0.1135], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0198, 0.0201, 0.0186, 0.0216, 0.0210, 0.0225, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:08:28,396 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.514e+02 1.871e+02 2.163e+02 3.576e+02, threshold=3.742e+02, percent-clipped=0.0 2023-03-26 17:08:40,954 INFO [finetune.py:976] (6/7) Epoch 14, batch 2800, loss[loss=0.1522, simple_loss=0.2253, pruned_loss=0.03958, over 4701.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2516, pruned_loss=0.0595, over 950554.32 frames. ], batch size: 23, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:08:45,240 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2939, 2.1640, 1.6920, 2.1473, 2.1720, 1.9358, 2.4925, 2.2098], device='cuda:6'), covar=tensor([0.1459, 0.2271, 0.3316, 0.2893, 0.2873, 0.1790, 0.3489, 0.2007], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0188, 0.0233, 0.0253, 0.0244, 0.0199, 0.0213, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:08:48,266 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:08:48,854 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:08:52,191 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77277.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:09:14,627 INFO [finetune.py:976] (6/7) Epoch 14, batch 2850, loss[loss=0.249, simple_loss=0.3027, pruned_loss=0.09764, over 4811.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2511, pruned_loss=0.05892, over 951787.63 frames. ], batch size: 51, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:09:14,982 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.34 vs. 
limit=5.0 2023-03-26 17:09:20,316 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 17:09:29,752 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77334.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:09:32,167 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77338.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:09:34,913 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.563e+02 1.884e+02 2.320e+02 5.201e+02, threshold=3.768e+02, percent-clipped=2.0 2023-03-26 17:09:42,277 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 17:09:47,973 INFO [finetune.py:976] (6/7) Epoch 14, batch 2900, loss[loss=0.2698, simple_loss=0.3236, pruned_loss=0.108, over 4184.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.255, pruned_loss=0.05996, over 951350.64 frames. ], batch size: 65, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:10:21,779 INFO [finetune.py:976] (6/7) Epoch 14, batch 2950, loss[loss=0.1904, simple_loss=0.2587, pruned_loss=0.06106, over 4822.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2578, pruned_loss=0.06094, over 953589.06 frames. ], batch size: 33, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:10:41,985 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.681e+01 1.651e+02 1.924e+02 2.203e+02 4.754e+02, threshold=3.848e+02, percent-clipped=2.0 2023-03-26 17:10:48,024 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77451.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:10:54,978 INFO [finetune.py:976] (6/7) Epoch 14, batch 3000, loss[loss=0.227, simple_loss=0.2875, pruned_loss=0.08324, over 4896.00 frames. ], tot_loss[loss=0.1933, simple_loss=0.261, pruned_loss=0.06279, over 953946.44 frames. 
], batch size: 43, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:10:54,978 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 17:11:00,776 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8952, 3.4542, 3.5619, 3.7870, 3.6206, 3.5155, 3.9660, 1.3236], device='cuda:6'), covar=tensor([0.0980, 0.1097, 0.0957, 0.1125, 0.1686, 0.1790, 0.0873, 0.5251], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0243, 0.0276, 0.0292, 0.0332, 0.0283, 0.0299, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:11:02,025 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1190, 1.9796, 1.7315, 1.8365, 2.1504, 1.8074, 2.3014, 2.1177], device='cuda:6'), covar=tensor([0.1433, 0.2427, 0.3355, 0.2625, 0.2678, 0.1809, 0.3456, 0.1951], device='cuda:6'), in_proj_covar=tensor([0.0179, 0.0188, 0.0232, 0.0252, 0.0243, 0.0198, 0.0212, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:11:03,134 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9054, 1.1258, 1.9291, 1.8475, 1.7228, 1.5968, 1.6929, 1.7674], device='cuda:6'), covar=tensor([0.4138, 0.4561, 0.3816, 0.4050, 0.5128, 0.3817, 0.4942, 0.3371], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0237, 0.0256, 0.0265, 0.0263, 0.0236, 0.0276, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:11:09,361 INFO [finetune.py:1010] (6/7) Epoch 14, validation: loss=0.1563, simple_loss=0.2268, pruned_loss=0.04293, over 2265189.00 frames. 2023-03-26 17:11:09,361 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 17:11:34,072 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77495.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:11:44,251 INFO [finetune.py:976] (6/7) Epoch 14, batch 3050, loss[loss=0.1625, simple_loss=0.2362, pruned_loss=0.04444, over 4728.00 frames. ], tot_loss[loss=0.1935, simple_loss=0.2612, pruned_loss=0.06292, over 954459.25 frames. ], batch size: 59, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:12:13,224 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.567e+02 1.800e+02 2.244e+02 5.193e+02, threshold=3.600e+02, percent-clipped=2.0 2023-03-26 17:12:13,931 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77543.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:12:21,785 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8013, 1.6996, 1.5358, 1.8426, 2.1775, 1.9528, 1.4402, 1.4883], device='cuda:6'), covar=tensor([0.2428, 0.2052, 0.2101, 0.1889, 0.1762, 0.1200, 0.2529, 0.2158], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0207, 0.0210, 0.0190, 0.0240, 0.0184, 0.0214, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:12:35,684 INFO [finetune.py:976] (6/7) Epoch 14, batch 3100, loss[loss=0.1593, simple_loss=0.2258, pruned_loss=0.04637, over 4816.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2575, pruned_loss=0.06089, over 956108.15 frames. 
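
The block above is the mid-epoch validation pass (the run validates every 3000 batches): finetune.py:1001 announces it, the model is run on the dev set with gradients disabled, and finetune.py:1010 prints a frame-weighted validation loss (0.1563 over 2265189 frames) followed by the peak CUDA memory. A sketch of both pieces; validation_loss and display_mb are hypothetical helpers, and the real code accumulates a MetricsTracker rather than a bare float:

    import torch

    @torch.no_grad()
    def validation_loss(model: torch.nn.Module, valid_dl) -> float:
        model.eval()
        tot_frames, tot_weighted = 0.0, 0.0
        for batch in valid_dl:
            loss, frames = model(batch)  # stand-in for the real loss computation
            tot_frames += frames
            tot_weighted += loss.item() * frames
        model.train()
        return tot_weighted / tot_frames  # printed as "validation: loss=..."

    def display_mb(device: torch.device) -> str:
        mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        return f"Maximum memory allocated so far is {mb}MB"
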
], batch size: 41, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:12:39,912 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77567.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:13:22,407 INFO [finetune.py:976] (6/7) Epoch 14, batch 3150, loss[loss=0.1715, simple_loss=0.2344, pruned_loss=0.05428, over 4940.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2554, pruned_loss=0.06063, over 955974.46 frames. ], batch size: 33, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:13:25,547 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:13:35,440 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77629.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:13:37,891 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77633.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:13:43,292 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.167e+02 1.643e+02 1.968e+02 2.398e+02 4.679e+02, threshold=3.936e+02, percent-clipped=3.0 2023-03-26 17:13:56,367 INFO [finetune.py:976] (6/7) Epoch 14, batch 3200, loss[loss=0.1551, simple_loss=0.2258, pruned_loss=0.04224, over 4787.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2524, pruned_loss=0.05957, over 955889.05 frames. ], batch size: 26, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:14:07,182 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=77677.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:14:07,191 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77677.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:14:29,512 INFO [finetune.py:976] (6/7) Epoch 14, batch 3250, loss[loss=0.1971, simple_loss=0.2714, pruned_loss=0.06137, over 4828.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2522, pruned_loss=0.05963, over 954451.07 frames. ], batch size: 30, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:14:47,333 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=77738.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:14:49,601 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.623e+02 1.885e+02 2.287e+02 7.301e+02, threshold=3.769e+02, percent-clipped=4.0 2023-03-26 17:14:55,593 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77751.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:15:02,066 INFO [finetune.py:976] (6/7) Epoch 14, batch 3300, loss[loss=0.1803, simple_loss=0.2418, pruned_loss=0.05943, over 4732.00 frames. ], tot_loss[loss=0.1894, simple_loss=0.2561, pruned_loss=0.06129, over 955241.20 frames. ], batch size: 23, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:15:27,674 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77799.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:15:35,622 INFO [finetune.py:976] (6/7) Epoch 14, batch 3350, loss[loss=0.2244, simple_loss=0.2875, pruned_loss=0.08066, over 4888.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2575, pruned_loss=0.06094, over 955139.78 frames. 
], batch size: 35, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:15:36,787 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1915, 2.9914, 2.8410, 1.1936, 2.9989, 2.1908, 0.7298, 1.8068], device='cuda:6'), covar=tensor([0.2447, 0.2214, 0.1677, 0.3454, 0.1351, 0.1189, 0.3877, 0.1605], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0172, 0.0159, 0.0127, 0.0156, 0.0121, 0.0144, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 17:15:57,266 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.683e+02 1.958e+02 2.277e+02 5.309e+02, threshold=3.915e+02, percent-clipped=1.0 2023-03-26 17:15:57,378 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3236, 1.5174, 0.8879, 1.9627, 2.3711, 1.6482, 1.7934, 1.9578], device='cuda:6'), covar=tensor([0.1223, 0.1993, 0.2115, 0.1145, 0.1858, 0.1913, 0.1452, 0.1812], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0094, 0.0111, 0.0091, 0.0118, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 17:16:09,331 INFO [finetune.py:976] (6/7) Epoch 14, batch 3400, loss[loss=0.1921, simple_loss=0.2788, pruned_loss=0.05271, over 4821.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2582, pruned_loss=0.06083, over 956247.43 frames. ], batch size: 47, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:16:18,560 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77867.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:16:51,417 INFO [finetune.py:976] (6/7) Epoch 14, batch 3450, loss[loss=0.172, simple_loss=0.2376, pruned_loss=0.05318, over 4824.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2586, pruned_loss=0.06119, over 956119.44 frames. ], batch size: 47, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:16:53,841 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77915.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:16:56,189 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5535, 1.3099, 2.0085, 3.2168, 2.1363, 2.1268, 0.9639, 2.6055], device='cuda:6'), covar=tensor([0.1757, 0.1547, 0.1268, 0.0533, 0.0810, 0.1381, 0.1889, 0.0533], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0100, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 17:17:03,248 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77929.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:17:05,712 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=77933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:17:11,407 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.160e+01 1.485e+02 1.825e+02 2.093e+02 4.049e+02, threshold=3.650e+02, percent-clipped=1.0 2023-03-26 17:17:33,223 INFO [finetune.py:976] (6/7) Epoch 14, batch 3500, loss[loss=0.197, simple_loss=0.2671, pruned_loss=0.06346, over 4228.00 frames. ], tot_loss[loss=0.1884, simple_loss=0.2557, pruned_loss=0.06052, over 955972.78 frames. 
], batch size: 65, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:17:40,921 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=77972.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:17:44,503 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77977.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:17:46,958 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=77981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:18:20,969 INFO [finetune.py:976] (6/7) Epoch 14, batch 3550, loss[loss=0.2046, simple_loss=0.2642, pruned_loss=0.07249, over 4820.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2542, pruned_loss=0.06017, over 956572.54 frames. ], batch size: 38, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:18:29,128 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-26 17:18:35,879 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78033.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:18:41,166 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.505e+02 1.960e+02 2.472e+02 4.194e+02, threshold=3.920e+02, percent-clipped=4.0 2023-03-26 17:18:54,340 INFO [finetune.py:976] (6/7) Epoch 14, batch 3600, loss[loss=0.1882, simple_loss=0.2491, pruned_loss=0.06365, over 4856.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2512, pruned_loss=0.05904, over 956420.53 frames. ], batch size: 44, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:19:23,659 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2536, 1.1938, 1.1107, 1.2471, 1.5557, 1.3919, 1.2683, 1.1334], device='cuda:6'), covar=tensor([0.0345, 0.0286, 0.0612, 0.0259, 0.0222, 0.0458, 0.0347, 0.0386], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0111, 0.0144, 0.0116, 0.0102, 0.0109, 0.0098, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.4001e-05, 8.6140e-05, 1.1375e-04, 8.9778e-05, 7.9920e-05, 8.0683e-05, 7.3857e-05, 8.4531e-05], device='cuda:6') 2023-03-26 17:19:28,413 INFO [finetune.py:976] (6/7) Epoch 14, batch 3650, loss[loss=0.2344, simple_loss=0.3103, pruned_loss=0.07919, over 4829.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.254, pruned_loss=0.06018, over 954262.37 frames. 
], batch size: 40, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:19:39,398 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1930, 2.0484, 2.1418, 0.9528, 2.3751, 2.6428, 2.2279, 2.0187], device='cuda:6'), covar=tensor([0.1004, 0.0738, 0.0512, 0.0763, 0.0682, 0.0682, 0.0691, 0.0743], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0154, 0.0123, 0.0131, 0.0131, 0.0128, 0.0144, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.4084e-05, 1.1253e-04, 8.8519e-05, 9.4141e-05, 9.2882e-05, 9.2597e-05, 1.0411e-04, 1.0645e-04], device='cuda:6') 2023-03-26 17:19:48,729 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.711e+02 2.033e+02 2.406e+02 8.151e+02, threshold=4.067e+02, percent-clipped=4.0 2023-03-26 17:19:48,882 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3060, 2.0119, 1.4896, 0.6242, 1.7620, 1.9945, 1.7949, 1.8419], device='cuda:6'), covar=tensor([0.0831, 0.0752, 0.1398, 0.1883, 0.1212, 0.1943, 0.1929, 0.0842], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0194, 0.0198, 0.0183, 0.0212, 0.0206, 0.0221, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:20:02,236 INFO [finetune.py:976] (6/7) Epoch 14, batch 3700, loss[loss=0.2419, simple_loss=0.3118, pruned_loss=0.08604, over 4918.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2574, pruned_loss=0.06111, over 955017.99 frames. ], batch size: 42, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:20:06,243 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 17:20:35,983 INFO [finetune.py:976] (6/7) Epoch 14, batch 3750, loss[loss=0.2026, simple_loss=0.2686, pruned_loss=0.06832, over 4893.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2583, pruned_loss=0.06095, over 953963.29 frames. ], batch size: 37, lr: 3.54e-03, grad_scale: 32.0 2023-03-26 17:20:36,754 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3078, 2.2108, 1.6490, 0.7833, 1.8795, 1.8071, 1.6979, 1.8744], device='cuda:6'), covar=tensor([0.0918, 0.0656, 0.1450, 0.2061, 0.1323, 0.2069, 0.2117, 0.0987], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0194, 0.0198, 0.0183, 0.0213, 0.0207, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:20:49,444 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78232.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:20:55,810 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.186e+02 1.631e+02 1.949e+02 2.239e+02 4.423e+02, threshold=3.899e+02, percent-clipped=1.0 2023-03-26 17:21:03,660 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2113, 1.3386, 1.3438, 0.7304, 1.2717, 1.5039, 1.6098, 1.2645], device='cuda:6'), covar=tensor([0.0848, 0.0510, 0.0443, 0.0454, 0.0444, 0.0482, 0.0285, 0.0574], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0153, 0.0122, 0.0130, 0.0130, 0.0127, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3327e-05, 1.1147e-04, 8.8143e-05, 9.3441e-05, 9.2193e-05, 9.2054e-05, 1.0324e-04, 1.0562e-04], device='cuda:6') 2023-03-26 17:21:08,219 INFO [finetune.py:976] (6/7) Epoch 14, batch 3800, loss[loss=0.2153, simple_loss=0.2866, pruned_loss=0.07197, over 4823.00 frames. ], tot_loss[loss=0.1914, simple_loss=0.2597, pruned_loss=0.0616, over 952014.35 frames. 
], batch size: 33, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:21:15,918 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:21:21,429 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8668, 1.6843, 1.5277, 1.8754, 2.2713, 1.9083, 1.5519, 1.4767], device='cuda:6'), covar=tensor([0.2005, 0.1930, 0.1766, 0.1489, 0.1602, 0.1114, 0.2313, 0.1786], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0209, 0.0211, 0.0192, 0.0243, 0.0185, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:21:31,059 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78293.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 17:21:49,337 INFO [finetune.py:976] (6/7) Epoch 14, batch 3850, loss[loss=0.2162, simple_loss=0.2735, pruned_loss=0.07943, over 4823.00 frames. ], tot_loss[loss=0.1909, simple_loss=0.2591, pruned_loss=0.06135, over 952674.25 frames. ], batch size: 30, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:21:49,472 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3601, 1.2778, 1.2059, 1.3761, 1.6098, 1.5168, 1.3673, 1.1435], device='cuda:6'), covar=tensor([0.0279, 0.0255, 0.0566, 0.0266, 0.0205, 0.0400, 0.0288, 0.0359], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0110, 0.0143, 0.0115, 0.0101, 0.0108, 0.0097, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.3371e-05, 8.5419e-05, 1.1322e-04, 8.9156e-05, 7.9156e-05, 7.9714e-05, 7.3300e-05, 8.3816e-05], device='cuda:6') 2023-03-26 17:21:55,868 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=78320.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:22:03,812 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78333.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:22:10,189 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.518e+02 1.784e+02 2.158e+02 4.566e+02, threshold=3.568e+02, percent-clipped=2.0 2023-03-26 17:22:22,702 INFO [finetune.py:976] (6/7) Epoch 14, batch 3900, loss[loss=0.1756, simple_loss=0.2372, pruned_loss=0.05705, over 4827.00 frames. ], tot_loss[loss=0.189, simple_loss=0.2569, pruned_loss=0.06058, over 953604.64 frames. ], batch size: 25, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:22:45,627 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=78381.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:23:02,729 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1903, 1.9720, 1.7458, 1.8605, 1.9022, 1.9137, 1.9347, 2.6075], device='cuda:6'), covar=tensor([0.4313, 0.4913, 0.3639, 0.4504, 0.4307, 0.2641, 0.4451, 0.2067], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0259, 0.0225, 0.0277, 0.0247, 0.0213, 0.0248, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:23:09,956 INFO [finetune.py:976] (6/7) Epoch 14, batch 3950, loss[loss=0.2123, simple_loss=0.2684, pruned_loss=0.07812, over 4766.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2529, pruned_loss=0.05918, over 954016.30 frames. 
], batch size: 28, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:23:37,906 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.587e+02 1.893e+02 2.259e+02 3.905e+02, threshold=3.786e+02, percent-clipped=1.0 2023-03-26 17:23:50,375 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78460.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:23:50,892 INFO [finetune.py:976] (6/7) Epoch 14, batch 4000, loss[loss=0.1536, simple_loss=0.2309, pruned_loss=0.03811, over 4808.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2518, pruned_loss=0.05875, over 956407.65 frames. ], batch size: 38, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:24:16,045 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2434, 1.9557, 2.5723, 4.2986, 3.0284, 2.8508, 1.3785, 3.4710], device='cuda:6'), covar=tensor([0.1629, 0.1452, 0.1402, 0.0471, 0.0682, 0.1340, 0.1728, 0.0490], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0131, 0.0162, 0.0100, 0.0136, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 17:24:24,846 INFO [finetune.py:976] (6/7) Epoch 14, batch 4050, loss[loss=0.1549, simple_loss=0.2346, pruned_loss=0.03767, over 4859.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2526, pruned_loss=0.05866, over 956730.84 frames. ], batch size: 31, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:24:31,539 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78521.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:24:45,463 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.285e+01 1.581e+02 1.919e+02 2.315e+02 3.488e+02, threshold=3.837e+02, percent-clipped=0.0 2023-03-26 17:24:46,222 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5017, 2.3500, 2.0141, 2.4713, 2.4451, 2.1771, 3.0435, 2.5306], device='cuda:6'), covar=tensor([0.1380, 0.2516, 0.3268, 0.2931, 0.2574, 0.1735, 0.2753, 0.1822], device='cuda:6'), in_proj_covar=tensor([0.0179, 0.0187, 0.0234, 0.0253, 0.0244, 0.0199, 0.0211, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:24:54,883 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78556.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:24:57,773 INFO [finetune.py:976] (6/7) Epoch 14, batch 4100, loss[loss=0.2088, simple_loss=0.2771, pruned_loss=0.07028, over 4850.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2571, pruned_loss=0.06058, over 955203.19 frames. ], batch size: 31, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:25:16,502 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78588.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:25:31,551 INFO [finetune.py:976] (6/7) Epoch 14, batch 4150, loss[loss=0.2018, simple_loss=0.2733, pruned_loss=0.06519, over 4885.00 frames. ], tot_loss[loss=0.1898, simple_loss=0.2578, pruned_loss=0.06088, over 954434.03 frames. 
], batch size: 43, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:25:35,320 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=78617.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:25:52,376 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.171e+02 1.587e+02 1.869e+02 2.218e+02 3.242e+02, threshold=3.739e+02, percent-clipped=0.0 2023-03-26 17:26:00,224 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6327, 3.7009, 3.5472, 1.6451, 3.8316, 2.9036, 0.8123, 2.6135], device='cuda:6'), covar=tensor([0.2344, 0.2016, 0.1474, 0.3436, 0.1049, 0.0986, 0.4398, 0.1504], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0173, 0.0160, 0.0127, 0.0157, 0.0122, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 17:26:04,876 INFO [finetune.py:976] (6/7) Epoch 14, batch 4200, loss[loss=0.1864, simple_loss=0.2578, pruned_loss=0.05746, over 4798.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2592, pruned_loss=0.06087, over 954062.53 frames. ], batch size: 40, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:26:05,580 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6732, 1.4990, 2.2326, 3.6171, 2.5278, 2.4373, 1.4746, 2.7979], device='cuda:6'), covar=tensor([0.1776, 0.1550, 0.1286, 0.0485, 0.0762, 0.1331, 0.1683, 0.0600], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0163, 0.0101, 0.0137, 0.0125, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 17:26:10,959 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3980, 2.2898, 1.9758, 2.3053, 2.2525, 2.1841, 2.2554, 2.9712], device='cuda:6'), covar=tensor([0.3934, 0.4296, 0.3369, 0.4130, 0.3817, 0.2668, 0.3940, 0.1720], device='cuda:6'), in_proj_covar=tensor([0.0283, 0.0258, 0.0224, 0.0275, 0.0246, 0.0213, 0.0247, 0.0223], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:26:37,997 INFO [finetune.py:976] (6/7) Epoch 14, batch 4250, loss[loss=0.1936, simple_loss=0.2667, pruned_loss=0.06021, over 4878.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2565, pruned_loss=0.05962, over 954053.82 frames. 
], batch size: 43, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:26:40,459 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4808, 1.3787, 1.2629, 1.5096, 1.5608, 1.5485, 0.9364, 1.3000], device='cuda:6'), covar=tensor([0.2281, 0.2054, 0.1993, 0.1767, 0.1788, 0.1327, 0.2745, 0.1957], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0211, 0.0192, 0.0242, 0.0185, 0.0216, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:27:05,885 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.501e+02 1.722e+02 2.085e+02 5.543e+02, threshold=3.444e+02, percent-clipped=2.0 2023-03-26 17:27:17,143 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9478, 1.3803, 0.7910, 1.7383, 2.2785, 1.4882, 1.7784, 1.8659], device='cuda:6'), covar=tensor([0.1376, 0.1947, 0.2085, 0.1184, 0.1835, 0.1861, 0.1352, 0.1755], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 17:27:21,260 INFO [finetune.py:976] (6/7) Epoch 14, batch 4300, loss[loss=0.1468, simple_loss=0.2234, pruned_loss=0.03514, over 4831.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2537, pruned_loss=0.05877, over 954901.99 frames. ], batch size: 30, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:27:59,987 INFO [finetune.py:976] (6/7) Epoch 14, batch 4350, loss[loss=0.2063, simple_loss=0.2554, pruned_loss=0.07859, over 4741.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2506, pruned_loss=0.05742, over 955112.96 frames. ], batch size: 54, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:28:06,810 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78816.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:28:34,428 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.179e+02 1.620e+02 1.919e+02 2.276e+02 4.946e+02, threshold=3.838e+02, percent-clipped=3.0 2023-03-26 17:28:54,136 INFO [finetune.py:976] (6/7) Epoch 14, batch 4400, loss[loss=0.2455, simple_loss=0.3041, pruned_loss=0.09348, over 4905.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.252, pruned_loss=0.05872, over 953834.37 frames. ], batch size: 43, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:29:02,552 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2865, 2.1938, 1.7900, 2.3372, 2.1803, 1.9586, 2.6082, 2.2904], device='cuda:6'), covar=tensor([0.1459, 0.2164, 0.3262, 0.2660, 0.2815, 0.1699, 0.3307, 0.1895], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0187, 0.0235, 0.0253, 0.0244, 0.0199, 0.0213, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:29:12,619 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=78888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:29:27,938 INFO [finetune.py:976] (6/7) Epoch 14, batch 4450, loss[loss=0.2021, simple_loss=0.2784, pruned_loss=0.06297, over 4860.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2553, pruned_loss=0.05952, over 952708.39 frames. 
], batch size: 49, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:29:28,613 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=78912.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:29:44,124 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=78936.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:29:48,694 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.623e+02 2.075e+02 2.454e+02 4.700e+02, threshold=4.150e+02, percent-clipped=3.0 2023-03-26 17:30:01,641 INFO [finetune.py:976] (6/7) Epoch 14, batch 4500, loss[loss=0.2088, simple_loss=0.2712, pruned_loss=0.07319, over 4242.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2562, pruned_loss=0.05967, over 952556.79 frames. ], batch size: 65, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:30:04,121 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=78965.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:30:34,869 INFO [finetune.py:976] (6/7) Epoch 14, batch 4550, loss[loss=0.1986, simple_loss=0.2639, pruned_loss=0.06662, over 4732.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2583, pruned_loss=0.06056, over 954276.47 frames. ], batch size: 27, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:30:44,016 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79026.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:30:54,527 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.627e+02 1.835e+02 2.182e+02 4.419e+02, threshold=3.671e+02, percent-clipped=1.0 2023-03-26 17:31:08,636 INFO [finetune.py:976] (6/7) Epoch 14, batch 4600, loss[loss=0.1649, simple_loss=0.2354, pruned_loss=0.04723, over 4745.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2581, pruned_loss=0.06051, over 954942.55 frames. ], batch size: 27, lr: 3.53e-03, grad_scale: 64.0 2023-03-26 17:31:41,336 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4302, 2.2149, 2.2223, 2.5398, 2.7922, 2.4386, 2.1675, 1.8227], device='cuda:6'), covar=tensor([0.1981, 0.1825, 0.1650, 0.1339, 0.1828, 0.1034, 0.2022, 0.1721], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0209, 0.0213, 0.0192, 0.0243, 0.0186, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:31:42,445 INFO [finetune.py:976] (6/7) Epoch 14, batch 4650, loss[loss=0.1908, simple_loss=0.2538, pruned_loss=0.06384, over 4894.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2567, pruned_loss=0.06089, over 955259.86 frames. ], batch size: 32, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:31:43,603 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. 
limit=2.0 2023-03-26 17:31:44,945 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9050, 4.9535, 4.4961, 2.3990, 4.9013, 3.9004, 1.0563, 3.4793], device='cuda:6'), covar=tensor([0.2402, 0.1535, 0.1602, 0.3113, 0.0726, 0.0794, 0.4482, 0.1282], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0174, 0.0161, 0.0128, 0.0157, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 17:31:45,550 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:32:02,944 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.520e+02 1.886e+02 2.415e+02 4.094e+02, threshold=3.771e+02, percent-clipped=1.0 2023-03-26 17:32:20,983 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5053, 1.0870, 0.7258, 1.4028, 2.0739, 0.6142, 1.3263, 1.4146], device='cuda:6'), covar=tensor([0.1526, 0.2146, 0.1804, 0.1316, 0.1806, 0.2012, 0.1524, 0.1935], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 17:32:24,490 INFO [finetune.py:976] (6/7) Epoch 14, batch 4700, loss[loss=0.1808, simple_loss=0.2372, pruned_loss=0.06214, over 4921.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2519, pruned_loss=0.05908, over 955196.53 frames. ], batch size: 46, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:32:25,218 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6508, 1.6121, 2.0405, 1.2938, 1.7064, 1.8796, 1.5617, 2.0353], device='cuda:6'), covar=tensor([0.1488, 0.2190, 0.1316, 0.1854, 0.0940, 0.1460, 0.2826, 0.0990], device='cuda:6'), in_proj_covar=tensor([0.0196, 0.0206, 0.0193, 0.0192, 0.0178, 0.0215, 0.0218, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:32:26,328 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79164.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:32:58,005 INFO [finetune.py:976] (6/7) Epoch 14, batch 4750, loss[loss=0.1744, simple_loss=0.2461, pruned_loss=0.0514, over 4829.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2499, pruned_loss=0.05838, over 954939.91 frames. 
], batch size: 49, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:32:59,209 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79212.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:33:32,018 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.558e+01 1.654e+02 1.996e+02 2.359e+02 6.861e+02, threshold=3.993e+02, percent-clipped=2.0 2023-03-26 17:33:36,542 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3540, 1.3260, 1.7609, 1.6627, 1.4839, 3.3355, 1.1819, 1.3999], device='cuda:6'), covar=tensor([0.1257, 0.2312, 0.1440, 0.1190, 0.1901, 0.0285, 0.2031, 0.2367], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 17:33:51,146 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79260.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:33:51,678 INFO [finetune.py:976] (6/7) Epoch 14, batch 4800, loss[loss=0.2234, simple_loss=0.2878, pruned_loss=0.07951, over 4821.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2529, pruned_loss=0.05918, over 957294.29 frames. ], batch size: 47, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:34:10,264 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79285.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:34:24,575 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2980, 1.4540, 1.5313, 0.8239, 1.4977, 1.7725, 1.7872, 1.4164], device='cuda:6'), covar=tensor([0.1149, 0.0800, 0.0590, 0.0688, 0.0583, 0.0651, 0.0360, 0.0872], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0123, 0.0129, 0.0130, 0.0126, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3064e-05, 1.1054e-04, 8.8500e-05, 9.2885e-05, 9.2267e-05, 9.1314e-05, 1.0273e-04, 1.0546e-04], device='cuda:6') 2023-03-26 17:34:27,365 INFO [finetune.py:976] (6/7) Epoch 14, batch 4850, loss[loss=0.1695, simple_loss=0.2475, pruned_loss=0.04572, over 4789.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2556, pruned_loss=0.06014, over 956376.61 frames. ], batch size: 29, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:34:35,462 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79321.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:34:42,732 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79333.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:34:49,131 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.639e+02 1.937e+02 2.312e+02 4.640e+02, threshold=3.873e+02, percent-clipped=1.0 2023-03-26 17:34:51,062 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79346.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 17:35:00,513 INFO [finetune.py:976] (6/7) Epoch 14, batch 4900, loss[loss=0.2137, simple_loss=0.29, pruned_loss=0.06872, over 4809.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.2574, pruned_loss=0.06107, over 953828.36 frames. ], batch size: 41, lr: 3.53e-03, grad_scale: 32.0 2023-03-26 17:35:03,434 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79364.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:35:03,805 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. 
limit=2.0 2023-03-26 17:35:11,510 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79375.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:35:23,211 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79394.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:35:34,335 INFO [finetune.py:976] (6/7) Epoch 14, batch 4950, loss[loss=0.2196, simple_loss=0.2708, pruned_loss=0.08423, over 4160.00 frames. ], tot_loss[loss=0.191, simple_loss=0.2587, pruned_loss=0.06169, over 953110.53 frames. ], batch size: 65, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:35:45,286 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79425.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:35:51,942 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79436.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 17:35:55,997 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.541e+02 1.877e+02 2.434e+02 3.585e+02, threshold=3.755e+02, percent-clipped=0.0 2023-03-26 17:36:07,907 INFO [finetune.py:976] (6/7) Epoch 14, batch 5000, loss[loss=0.1709, simple_loss=0.2396, pruned_loss=0.05108, over 4683.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2568, pruned_loss=0.06076, over 953631.31 frames. ], batch size: 23, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:36:41,417 INFO [finetune.py:976] (6/7) Epoch 14, batch 5050, loss[loss=0.2104, simple_loss=0.2662, pruned_loss=0.07733, over 4817.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2536, pruned_loss=0.05928, over 955404.94 frames. ], batch size: 41, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:36:55,528 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9004, 1.8444, 2.1771, 1.4361, 1.9121, 2.1760, 1.7452, 2.3583], device='cuda:6'), covar=tensor([0.1354, 0.1944, 0.1481, 0.1889, 0.0935, 0.1447, 0.2630, 0.0893], device='cuda:6'), in_proj_covar=tensor([0.0195, 0.0205, 0.0192, 0.0190, 0.0176, 0.0214, 0.0216, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:36:58,568 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8882, 1.8405, 1.6648, 1.9786, 1.3607, 4.6334, 1.6780, 2.1796], device='cuda:6'), covar=tensor([0.3162, 0.2335, 0.2035, 0.2168, 0.1688, 0.0134, 0.2359, 0.1217], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 17:37:02,698 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.152e+02 1.527e+02 1.776e+02 2.127e+02 3.568e+02, threshold=3.553e+02, percent-clipped=0.0 2023-03-26 17:37:05,291 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9357, 1.1192, 1.9129, 1.8554, 1.7274, 1.6564, 1.7413, 1.8184], device='cuda:6'), covar=tensor([0.3755, 0.3987, 0.3424, 0.3601, 0.4475, 0.3481, 0.4306, 0.3088], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0241, 0.0258, 0.0268, 0.0266, 0.0239, 0.0280, 0.0236], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:37:07,043 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8832, 3.8588, 3.7433, 1.8308, 3.9796, 2.9673, 1.0350, 2.8119], device='cuda:6'), covar=tensor([0.2201, 0.2013, 0.1395, 0.3338, 0.1013, 0.0992, 0.4358, 0.1534], device='cuda:6'), 
in_proj_covar=tensor([0.0151, 0.0173, 0.0160, 0.0127, 0.0157, 0.0122, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 17:37:11,908 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.03 vs. limit=5.0 2023-03-26 17:37:14,550 INFO [finetune.py:976] (6/7) Epoch 14, batch 5100, loss[loss=0.1742, simple_loss=0.2327, pruned_loss=0.05783, over 4839.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2503, pruned_loss=0.05794, over 956580.89 frames. ], batch size: 47, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:37:41,614 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79585.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:37:57,943 INFO [finetune.py:976] (6/7) Epoch 14, batch 5150, loss[loss=0.2119, simple_loss=0.2783, pruned_loss=0.0727, over 4710.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2516, pruned_loss=0.05889, over 954605.17 frames. ], batch size: 59, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:38:04,616 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79621.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:38:20,364 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79641.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 17:38:21,528 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.601e+02 1.924e+02 2.369e+02 3.228e+02, threshold=3.849e+02, percent-clipped=0.0 2023-03-26 17:38:23,487 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79646.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:38:33,788 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7251, 1.4952, 1.5262, 1.5297, 1.9107, 1.8219, 1.6525, 1.4288], device='cuda:6'), covar=tensor([0.0326, 0.0326, 0.0521, 0.0336, 0.0257, 0.0497, 0.0363, 0.0449], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0109, 0.0143, 0.0115, 0.0101, 0.0108, 0.0097, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3374e-05, 8.4742e-05, 1.1284e-04, 8.9012e-05, 7.9074e-05, 7.9550e-05, 7.3143e-05, 8.3268e-05], device='cuda:6') 2023-03-26 17:38:38,693 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79657.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:38:41,520 INFO [finetune.py:976] (6/7) Epoch 14, batch 5200, loss[loss=0.2101, simple_loss=0.2823, pruned_loss=0.06895, over 4901.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2544, pruned_loss=0.0597, over 954359.67 frames. ], batch size: 35, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:38:50,981 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79669.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:39:10,039 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79689.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:39:20,362 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 17:39:27,297 INFO [finetune.py:976] (6/7) Epoch 14, batch 5250, loss[loss=0.2047, simple_loss=0.2804, pruned_loss=0.06453, over 4818.00 frames. ], tot_loss[loss=0.1895, simple_loss=0.2573, pruned_loss=0.06083, over 955472.54 frames. ], batch size: 40, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:39:31,683 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.76 vs. 
limit=5.0 2023-03-26 17:39:32,136 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79718.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:39:33,327 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:39:40,420 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79731.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 17:39:49,076 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.058e+02 1.683e+02 1.987e+02 2.478e+02 3.642e+02, threshold=3.974e+02, percent-clipped=0.0 2023-03-26 17:39:59,973 INFO [finetune.py:976] (6/7) Epoch 14, batch 5300, loss[loss=0.1901, simple_loss=0.2519, pruned_loss=0.06413, over 4272.00 frames. ], tot_loss[loss=0.1908, simple_loss=0.2592, pruned_loss=0.06121, over 954794.79 frames. ], batch size: 65, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:40:00,070 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4407, 1.3540, 1.6684, 1.7623, 1.4816, 3.2450, 1.2470, 1.3728], device='cuda:6'), covar=tensor([0.1247, 0.2444, 0.1490, 0.1234, 0.2045, 0.0320, 0.2081, 0.2439], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0078, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 17:40:00,725 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=79762.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:40:33,374 INFO [finetune.py:976] (6/7) Epoch 14, batch 5350, loss[loss=0.2869, simple_loss=0.3404, pruned_loss=0.1167, over 4150.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2594, pruned_loss=0.06101, over 955603.88 frames. ], batch size: 65, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:40:41,365 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=79823.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:40:55,392 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.503e+02 1.776e+02 2.291e+02 4.117e+02, threshold=3.553e+02, percent-clipped=3.0 2023-03-26 17:41:06,813 INFO [finetune.py:976] (6/7) Epoch 14, batch 5400, loss[loss=0.2022, simple_loss=0.2674, pruned_loss=0.06848, over 4898.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2561, pruned_loss=0.05959, over 955629.93 frames. ], batch size: 35, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:41:21,964 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8316, 1.6550, 2.1129, 1.3427, 1.9293, 2.0775, 1.6005, 2.2901], device='cuda:6'), covar=tensor([0.1313, 0.2042, 0.1426, 0.1833, 0.0881, 0.1385, 0.2617, 0.0799], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0204, 0.0192, 0.0190, 0.0176, 0.0214, 0.0217, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:41:40,251 INFO [finetune.py:976] (6/7) Epoch 14, batch 5450, loss[loss=0.1623, simple_loss=0.2283, pruned_loss=0.0482, over 4736.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2517, pruned_loss=0.05799, over 955366.72 frames. 
], batch size: 59, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:41:59,677 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=79941.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:41:59,695 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79941.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 17:42:00,807 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.485e+02 1.773e+02 2.175e+02 4.141e+02, threshold=3.546e+02, percent-clipped=3.0 2023-03-26 17:42:14,248 INFO [finetune.py:976] (6/7) Epoch 14, batch 5500, loss[loss=0.2161, simple_loss=0.2733, pruned_loss=0.07949, over 4771.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2482, pruned_loss=0.05644, over 954824.72 frames. ], batch size: 54, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:42:32,375 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=79989.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:42:32,405 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=79989.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:42:54,668 INFO [finetune.py:976] (6/7) Epoch 14, batch 5550, loss[loss=0.1766, simple_loss=0.2538, pruned_loss=0.04972, over 4921.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.25, pruned_loss=0.05755, over 953562.55 frames. ], batch size: 38, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:42:55,988 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80013.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:42:58,457 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5007, 1.4108, 1.2735, 1.4362, 1.8166, 1.6730, 1.4296, 1.2798], device='cuda:6'), covar=tensor([0.0312, 0.0303, 0.0647, 0.0303, 0.0207, 0.0429, 0.0326, 0.0435], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0109, 0.0142, 0.0114, 0.0101, 0.0107, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3007e-05, 8.4518e-05, 1.1264e-04, 8.8430e-05, 7.8872e-05, 7.9505e-05, 7.3365e-05, 8.3366e-05], device='cuda:6') 2023-03-26 17:43:00,263 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80020.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:43:07,420 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80031.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:43:11,545 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80037.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:43:15,066 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.609e+02 1.862e+02 2.441e+02 4.163e+02, threshold=3.724e+02, percent-clipped=2.0 2023-03-26 17:43:25,571 INFO [finetune.py:976] (6/7) Epoch 14, batch 5600, loss[loss=0.2091, simple_loss=0.2791, pruned_loss=0.06952, over 4884.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2563, pruned_loss=0.06004, over 954946.80 frames. 
], batch size: 32, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:43:29,677 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80068.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:43:36,043 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80079.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 17:43:50,478 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2833, 2.2592, 1.7570, 2.4956, 2.3056, 1.9379, 2.8529, 2.3388], device='cuda:6'), covar=tensor([0.1352, 0.2430, 0.3047, 0.2604, 0.2675, 0.1634, 0.2900, 0.1805], device='cuda:6'), in_proj_covar=tensor([0.0179, 0.0186, 0.0233, 0.0253, 0.0243, 0.0199, 0.0212, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:44:12,670 INFO [finetune.py:976] (6/7) Epoch 14, batch 5650, loss[loss=0.2038, simple_loss=0.2664, pruned_loss=0.07061, over 4857.00 frames. ], tot_loss[loss=0.1894, simple_loss=0.2578, pruned_loss=0.0605, over 955272.32 frames. ], batch size: 31, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:44:21,802 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80118.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:44:41,677 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.115e+02 1.565e+02 1.838e+02 2.201e+02 4.652e+02, threshold=3.676e+02, percent-clipped=2.0 2023-03-26 17:44:52,833 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8654, 3.4342, 3.5944, 3.7623, 3.6738, 3.4165, 3.9182, 1.4394], device='cuda:6'), covar=tensor([0.0904, 0.0894, 0.0798, 0.0897, 0.1316, 0.1500, 0.0771, 0.4891], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0243, 0.0272, 0.0292, 0.0330, 0.0282, 0.0299, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:44:56,307 INFO [finetune.py:976] (6/7) Epoch 14, batch 5700, loss[loss=0.2041, simple_loss=0.2506, pruned_loss=0.07877, over 4278.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2548, pruned_loss=0.0602, over 936782.57 frames. ], batch size: 18, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:45:28,232 INFO [finetune.py:976] (6/7) Epoch 15, batch 0, loss[loss=0.203, simple_loss=0.2698, pruned_loss=0.06809, over 4857.00 frames. ], tot_loss[loss=0.203, simple_loss=0.2698, pruned_loss=0.06809, over 4857.00 frames. ], batch size: 44, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:45:28,232 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 17:45:31,659 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1126, 1.9918, 1.5324, 0.7703, 1.8314, 1.8326, 1.6899, 1.9020], device='cuda:6'), covar=tensor([0.0824, 0.0602, 0.1266, 0.1616, 0.1056, 0.1747, 0.1784, 0.0652], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0195, 0.0200, 0.0183, 0.0213, 0.0207, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:45:42,542 INFO [finetune.py:1010] (6/7) Epoch 15, validation: loss=0.1586, simple_loss=0.2288, pruned_loss=0.0442, over 2265189.00 frames. 
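Note on the two validation entries just above: at batch 0 of epoch 15, finetune.py reports "validation: loss=0.1586, simple_loss=0.2288, pruned_loss=0.0442, over 2265189.00 frames.", i.e. each loss is averaged over all acoustic frames in the dev set rather than over batches. A minimal sketch of that bookkeeping is below; the helper names (compute_loss returning per-batch summed losses plus a frame count) are illustrative assumptions, not icefall's actual internals.

```python
# Minimal sketch (assumed helper names, not icefall's actual code) of the
# frame-weighted validation loss reported above as
# "validation: loss=..., over 2265189.00 frames."
import torch


def compute_validation_loss(model, dev_loader, compute_loss):
    """Average each loss over all dev-set frames, not over batches."""
    model.eval()
    totals = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
    total_frames = 0.0
    with torch.no_grad():
        for batch in dev_loader:
            # compute_loss is assumed to return per-batch *summed* losses
            # and the number of acoustic frames in the batch.
            losses, num_frames = compute_loss(model, batch)
            for name in totals:
                totals[name] += float(losses[name])
            total_frames += num_frames
    model.train()
    return {name: value / total_frames for name, value in totals.items()}
```

The per-batch fields in the training entries (e.g. "loss[loss=0.203, ..., over 4857.00 frames.]") follow the same convention applied to a single batch's frames.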
2023-03-26 17:45:42,542 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 17:45:42,691 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1822, 2.0508, 1.7125, 2.0689, 2.1623, 1.8403, 2.4076, 2.1947], device='cuda:6'), covar=tensor([0.1356, 0.2296, 0.3135, 0.2680, 0.2633, 0.1689, 0.3474, 0.1767], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0187, 0.0234, 0.0255, 0.0245, 0.0200, 0.0213, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:46:12,251 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6661, 1.1831, 0.8166, 1.5203, 1.9173, 1.0909, 1.4150, 1.5961], device='cuda:6'), covar=tensor([0.1477, 0.2081, 0.1976, 0.1248, 0.2095, 0.2053, 0.1439, 0.1895], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 17:46:15,193 INFO [finetune.py:976] (6/7) Epoch 15, batch 50, loss[loss=0.144, simple_loss=0.213, pruned_loss=0.03752, over 4829.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2581, pruned_loss=0.06119, over 216446.88 frames. ], batch size: 47, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:46:17,639 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80241.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:46:17,691 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6750, 1.5494, 1.3840, 1.6854, 1.9741, 1.7091, 1.2616, 1.3887], device='cuda:6'), covar=tensor([0.2203, 0.2163, 0.2005, 0.1671, 0.1758, 0.1286, 0.2548, 0.1912], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0207, 0.0211, 0.0191, 0.0241, 0.0185, 0.0215, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:46:18,753 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.470e+02 1.896e+02 2.201e+02 3.299e+02, threshold=3.792e+02, percent-clipped=0.0 2023-03-26 17:46:21,720 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4232, 1.4307, 1.6535, 1.6334, 1.5250, 3.2799, 1.2710, 1.4434], device='cuda:6'), covar=tensor([0.0988, 0.1875, 0.1128, 0.1024, 0.1668, 0.0266, 0.1683, 0.1770], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0080, 0.0073, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 17:46:37,484 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 17:46:43,572 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8169, 1.8192, 1.9134, 1.1873, 1.9600, 1.9179, 1.8068, 1.6609], device='cuda:6'), covar=tensor([0.0541, 0.0649, 0.0584, 0.0877, 0.0644, 0.0646, 0.0581, 0.0952], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0132, 0.0141, 0.0123, 0.0123, 0.0140, 0.0141, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:46:45,391 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80283.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:46:48,315 INFO [finetune.py:976] (6/7) Epoch 15, batch 100, loss[loss=0.2237, simple_loss=0.2787, pruned_loss=0.08431, over 4900.00 frames. 
], tot_loss[loss=0.1849, simple_loss=0.252, pruned_loss=0.0589, over 381410.11 frames. ], batch size: 35, lr: 3.52e-03, grad_scale: 32.0 2023-03-26 17:46:48,990 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80289.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:47:03,370 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-26 17:47:05,085 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80313.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:47:11,837 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0 2023-03-26 17:47:21,594 INFO [finetune.py:976] (6/7) Epoch 15, batch 150, loss[loss=0.1818, simple_loss=0.2518, pruned_loss=0.05584, over 4711.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2464, pruned_loss=0.05724, over 509295.34 frames. ], batch size: 23, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:47:25,131 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.875e+01 1.589e+02 1.860e+02 2.189e+02 4.694e+02, threshold=3.721e+02, percent-clipped=1.0 2023-03-26 17:47:25,882 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80344.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:47:37,132 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80361.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:47:42,670 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4877, 1.2428, 1.2852, 1.3302, 1.6669, 1.5171, 1.4013, 1.2622], device='cuda:6'), covar=tensor([0.0300, 0.0297, 0.0631, 0.0322, 0.0258, 0.0520, 0.0314, 0.0433], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0108, 0.0141, 0.0113, 0.0100, 0.0107, 0.0097, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2362e-05, 8.3734e-05, 1.1152e-04, 8.7637e-05, 7.8303e-05, 7.9081e-05, 7.2584e-05, 8.2632e-05], device='cuda:6') 2023-03-26 17:47:54,530 INFO [finetune.py:976] (6/7) Epoch 15, batch 200, loss[loss=0.1903, simple_loss=0.2411, pruned_loss=0.06973, over 4790.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2461, pruned_loss=0.05727, over 606773.72 frames. ], batch size: 26, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:47:58,239 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80393.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:48:16,700 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80418.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:48:34,169 INFO [finetune.py:976] (6/7) Epoch 15, batch 250, loss[loss=0.1192, simple_loss=0.1841, pruned_loss=0.02718, over 4809.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2499, pruned_loss=0.05896, over 685060.67 frames. 
], batch size: 25, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:48:37,162 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.156e+02 1.638e+02 2.049e+02 2.410e+02 5.367e+02, threshold=4.098e+02, percent-clipped=2.0 2023-03-26 17:48:48,466 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80454.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:48:58,603 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80463.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:49:00,361 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80466.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:49:02,791 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.9330, 1.7959, 1.9928, 1.2770, 2.0236, 2.1243, 2.1687, 1.7814], device='cuda:6'), covar=tensor([0.0779, 0.0647, 0.0360, 0.0536, 0.0417, 0.0752, 0.0322, 0.0623], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0152, 0.0123, 0.0129, 0.0129, 0.0127, 0.0141, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.2697e-05, 1.1063e-04, 8.8056e-05, 9.2576e-05, 9.1544e-05, 9.1827e-05, 1.0211e-04, 1.0448e-04], device='cuda:6') 2023-03-26 17:49:16,899 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.50 vs. limit=5.0 2023-03-26 17:49:20,044 INFO [finetune.py:976] (6/7) Epoch 15, batch 300, loss[loss=0.1601, simple_loss=0.2372, pruned_loss=0.04155, over 4729.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2538, pruned_loss=0.0601, over 742521.09 frames. ], batch size: 54, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:49:24,811 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80494.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:49:46,733 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7896, 2.4360, 1.9368, 0.9331, 2.2958, 2.2272, 1.9216, 2.1862], device='cuda:6'), covar=tensor([0.0626, 0.0750, 0.1423, 0.2014, 0.1288, 0.1857, 0.1940, 0.0864], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0194, 0.0200, 0.0182, 0.0213, 0.0206, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:50:03,919 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80524.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:50:04,544 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2436, 1.3227, 1.5757, 1.1099, 1.2332, 1.4702, 1.3161, 1.6641], device='cuda:6'), covar=tensor([0.1244, 0.1980, 0.1236, 0.1430, 0.1011, 0.1265, 0.2704, 0.0886], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0203, 0.0191, 0.0189, 0.0175, 0.0213, 0.0217, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:50:12,916 INFO [finetune.py:976] (6/7) Epoch 15, batch 350, loss[loss=0.2223, simple_loss=0.2909, pruned_loss=0.07682, over 4898.00 frames. ], tot_loss[loss=0.19, simple_loss=0.2568, pruned_loss=0.06158, over 790086.40 frames. 
], batch size: 43, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:50:14,269 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2791, 2.0338, 2.7315, 1.6170, 2.4307, 2.6131, 1.8950, 2.7388], device='cuda:6'), covar=tensor([0.1392, 0.2033, 0.1547, 0.2241, 0.0837, 0.1599, 0.2483, 0.0903], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0203, 0.0191, 0.0190, 0.0175, 0.0213, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:50:16,429 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.511e+02 1.809e+02 2.185e+02 3.892e+02, threshold=3.618e+02, percent-clipped=0.0 2023-03-26 17:50:24,825 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80555.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:50:30,987 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 17:50:47,436 INFO [finetune.py:976] (6/7) Epoch 15, batch 400, loss[loss=0.2081, simple_loss=0.2837, pruned_loss=0.06621, over 4744.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2577, pruned_loss=0.06105, over 828002.72 frames. ], batch size: 54, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:51:29,122 INFO [finetune.py:976] (6/7) Epoch 15, batch 450, loss[loss=0.1721, simple_loss=0.2422, pruned_loss=0.05098, over 4895.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2567, pruned_loss=0.06056, over 856820.48 frames. ], batch size: 35, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:51:29,783 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80639.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:51:32,681 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.580e+02 1.854e+02 2.177e+02 4.594e+02, threshold=3.707e+02, percent-clipped=2.0 2023-03-26 17:52:03,114 INFO [finetune.py:976] (6/7) Epoch 15, batch 500, loss[loss=0.1641, simple_loss=0.2367, pruned_loss=0.04571, over 4784.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2539, pruned_loss=0.05968, over 876763.47 frames. ], batch size: 51, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:52:21,990 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3358, 2.3586, 2.0266, 2.0789, 2.6038, 2.8431, 2.3995, 2.2366], device='cuda:6'), covar=tensor([0.0374, 0.0404, 0.0552, 0.0360, 0.0330, 0.0445, 0.0607, 0.0418], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0109, 0.0142, 0.0114, 0.0101, 0.0107, 0.0097, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3258e-05, 8.4568e-05, 1.1262e-04, 8.8343e-05, 7.8933e-05, 7.9349e-05, 7.2951e-05, 8.3589e-05], device='cuda:6') 2023-03-26 17:52:26,223 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2631, 3.7405, 3.9009, 4.1091, 4.0200, 3.8164, 4.3525, 1.3767], device='cuda:6'), covar=tensor([0.0784, 0.0842, 0.0868, 0.1042, 0.1199, 0.1507, 0.0692, 0.5558], device='cuda:6'), in_proj_covar=tensor([0.0345, 0.0242, 0.0271, 0.0290, 0.0329, 0.0280, 0.0296, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:52:35,997 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80736.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:52:37,121 INFO [finetune.py:976] (6/7) Epoch 15, batch 550, loss[loss=0.1771, simple_loss=0.2497, pruned_loss=0.05222, over 4840.00 frames. 
], tot_loss[loss=0.184, simple_loss=0.2509, pruned_loss=0.05859, over 894150.72 frames. ], batch size: 49, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:52:38,489 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9082, 1.8291, 1.6940, 1.9655, 2.3585, 1.8856, 1.7552, 1.6848], device='cuda:6'), covar=tensor([0.1587, 0.1728, 0.1505, 0.1289, 0.1412, 0.1089, 0.2091, 0.1523], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0192, 0.0243, 0.0186, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:52:40,202 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.496e+02 1.725e+02 2.011e+02 3.976e+02, threshold=3.451e+02, percent-clipped=1.0 2023-03-26 17:52:44,398 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80749.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:53:01,314 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=80773.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:53:10,746 INFO [finetune.py:976] (6/7) Epoch 15, batch 600, loss[loss=0.2094, simple_loss=0.2826, pruned_loss=0.06807, over 4822.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2521, pruned_loss=0.05947, over 908439.39 frames. ], batch size: 40, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:53:16,799 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80797.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:53:22,481 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0 2023-03-26 17:53:23,210 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-26 17:53:32,697 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80819.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:53:44,670 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=80834.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:53:47,024 INFO [finetune.py:976] (6/7) Epoch 15, batch 650, loss[loss=0.1567, simple_loss=0.2461, pruned_loss=0.03362, over 4797.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2546, pruned_loss=0.06014, over 918698.89 frames. 
], batch size: 29, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:53:48,855 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9643, 1.4080, 0.9276, 1.8421, 2.2301, 1.5942, 1.7527, 1.8373], device='cuda:6'), covar=tensor([0.1263, 0.1914, 0.1952, 0.1070, 0.1794, 0.1937, 0.1326, 0.1730], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0097, 0.0113, 0.0094, 0.0121, 0.0095, 0.0100, 0.0091], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 17:53:50,575 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.196e+02 1.643e+02 1.965e+02 2.358e+02 6.399e+02, threshold=3.929e+02, percent-clipped=5.0 2023-03-26 17:53:55,338 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=80850.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:53:55,366 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4961, 1.4234, 1.6068, 1.7025, 1.6011, 3.3169, 1.3043, 1.5539], device='cuda:6'), covar=tensor([0.1008, 0.1862, 0.1156, 0.1042, 0.1670, 0.0262, 0.1602, 0.1760], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0078, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 17:54:29,239 INFO [finetune.py:976] (6/7) Epoch 15, batch 700, loss[loss=0.3148, simple_loss=0.3581, pruned_loss=0.1357, over 4158.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2559, pruned_loss=0.0604, over 925705.47 frames. ], batch size: 65, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:54:58,396 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7233, 1.4724, 1.3197, 0.9172, 1.4847, 1.6192, 1.6897, 1.3278], device='cuda:6'), covar=tensor([0.0811, 0.0542, 0.0518, 0.0528, 0.0395, 0.0576, 0.0272, 0.0619], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0122, 0.0128, 0.0129, 0.0126, 0.0140, 0.0144], device='cuda:6'), out_proj_covar=tensor([9.1865e-05, 1.0940e-04, 8.7547e-05, 9.1770e-05, 9.1405e-05, 9.0883e-05, 1.0150e-04, 1.0400e-04], device='cuda:6') 2023-03-26 17:55:23,119 INFO [finetune.py:976] (6/7) Epoch 15, batch 750, loss[loss=0.1757, simple_loss=0.2433, pruned_loss=0.05407, over 4749.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2569, pruned_loss=0.06075, over 930754.45 frames. ], batch size: 54, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:55:23,823 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=80939.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:55:26,166 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.628e+02 1.856e+02 2.303e+02 3.612e+02, threshold=3.712e+02, percent-clipped=0.0 2023-03-26 17:55:56,364 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=80987.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:55:56,904 INFO [finetune.py:976] (6/7) Epoch 15, batch 800, loss[loss=0.1624, simple_loss=0.2425, pruned_loss=0.04122, over 4922.00 frames. ], tot_loss[loss=0.1881, simple_loss=0.2566, pruned_loss=0.05985, over 937998.93 frames. ], batch size: 38, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:56:02,892 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-26 17:56:38,263 INFO [finetune.py:976] (6/7) Epoch 15, batch 850, loss[loss=0.1611, simple_loss=0.2271, pruned_loss=0.0476, over 4770.00 frames. 
], tot_loss[loss=0.1859, simple_loss=0.2541, pruned_loss=0.05888, over 943309.78 frames. ], batch size: 28, lr: 3.51e-03, grad_scale: 32.0 2023-03-26 17:56:41,290 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.679e+02 1.976e+02 2.340e+02 3.768e+02, threshold=3.952e+02, percent-clipped=2.0 2023-03-26 17:56:44,983 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81049.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:11,961 INFO [finetune.py:976] (6/7) Epoch 15, batch 900, loss[loss=0.1712, simple_loss=0.2378, pruned_loss=0.05232, over 4835.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.251, pruned_loss=0.0576, over 944864.47 frames. ], batch size: 30, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:57:14,451 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81092.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:17,459 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81097.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:25,175 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4444, 1.2047, 1.2132, 1.3370, 1.6231, 1.4605, 1.3463, 1.2189], device='cuda:6'), covar=tensor([0.0278, 0.0302, 0.0576, 0.0276, 0.0238, 0.0484, 0.0288, 0.0376], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0111, 0.0144, 0.0115, 0.0102, 0.0109, 0.0098, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.4032e-05, 8.5559e-05, 1.1394e-04, 8.9218e-05, 7.9871e-05, 8.0383e-05, 7.3880e-05, 8.4238e-05], device='cuda:6') 2023-03-26 17:57:29,871 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:31,674 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81119.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:38,663 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81129.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:57:45,614 INFO [finetune.py:976] (6/7) Epoch 15, batch 950, loss[loss=0.1609, simple_loss=0.2337, pruned_loss=0.04404, over 4731.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2507, pruned_loss=0.05812, over 949925.68 frames. ], batch size: 54, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:57:48,669 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.129e+01 1.456e+02 1.848e+02 2.216e+02 5.430e+02, threshold=3.695e+02, percent-clipped=2.0 2023-03-26 17:57:52,999 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81150.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:03,768 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81167.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:11,294 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81177.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:19,389 INFO [finetune.py:976] (6/7) Epoch 15, batch 1000, loss[loss=0.17, simple_loss=0.2589, pruned_loss=0.04056, over 4780.00 frames. ], tot_loss[loss=0.187, simple_loss=0.254, pruned_loss=0.06001, over 952570.91 frames. 
], batch size: 29, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:58:25,510 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81198.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:58:37,144 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6791, 1.3963, 0.9338, 0.1922, 1.2066, 1.4327, 1.2753, 1.2567], device='cuda:6'), covar=tensor([0.0914, 0.1114, 0.1732, 0.2501, 0.1704, 0.2794, 0.2901, 0.1096], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0195, 0.0200, 0.0184, 0.0214, 0.0207, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 17:58:52,894 INFO [finetune.py:976] (6/7) Epoch 15, batch 1050, loss[loss=0.1566, simple_loss=0.2405, pruned_loss=0.03631, over 4845.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2574, pruned_loss=0.06054, over 953687.57 frames. ], batch size: 49, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:58:56,385 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.183e+02 1.566e+02 1.800e+02 2.282e+02 3.514e+02, threshold=3.601e+02, percent-clipped=0.0 2023-03-26 17:58:56,480 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7253, 4.0882, 3.7138, 1.9527, 4.1325, 3.1332, 0.9187, 3.0048], device='cuda:6'), covar=tensor([0.2436, 0.2149, 0.1643, 0.3477, 0.1106, 0.1012, 0.4693, 0.1469], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0172, 0.0158, 0.0127, 0.0155, 0.0121, 0.0144, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 17:59:31,698 INFO [finetune.py:976] (6/7) Epoch 15, batch 1100, loss[loss=0.2365, simple_loss=0.3016, pruned_loss=0.08565, over 4808.00 frames. ], tot_loss[loss=0.1901, simple_loss=0.2589, pruned_loss=0.0607, over 954653.71 frames. ], batch size: 45, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 17:59:43,361 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81299.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 17:59:49,389 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81301.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:00:16,532 INFO [finetune.py:976] (6/7) Epoch 15, batch 1150, loss[loss=0.2156, simple_loss=0.2828, pruned_loss=0.07421, over 4894.00 frames. ], tot_loss[loss=0.1907, simple_loss=0.2596, pruned_loss=0.06091, over 955129.91 frames. ], batch size: 36, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 18:00:21,054 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.13 vs. limit=5.0 2023-03-26 18:00:23,241 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.635e+02 2.084e+02 2.407e+02 3.907e+02, threshold=4.168e+02, percent-clipped=1.0 2023-03-26 18:00:41,629 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81360.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:00:42,873 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81362.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:00:59,901 INFO [finetune.py:976] (6/7) Epoch 15, batch 1200, loss[loss=0.1971, simple_loss=0.2748, pruned_loss=0.05972, over 4823.00 frames. ], tot_loss[loss=0.1891, simple_loss=0.2578, pruned_loss=0.06022, over 954939.86 frames. 
], batch size: 33, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 18:01:03,414 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81392.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:01:27,559 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81429.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:01:35,102 INFO [finetune.py:976] (6/7) Epoch 15, batch 1250, loss[loss=0.1603, simple_loss=0.236, pruned_loss=0.04237, over 4709.00 frames. ], tot_loss[loss=0.187, simple_loss=0.2547, pruned_loss=0.0596, over 956364.97 frames. ], batch size: 23, lr: 3.51e-03, grad_scale: 64.0 2023-03-26 18:01:40,099 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81440.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:01:42,325 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.103e+01 1.546e+02 1.830e+02 2.259e+02 3.665e+02, threshold=3.660e+02, percent-clipped=0.0 2023-03-26 18:01:48,743 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7219, 1.5902, 1.5612, 1.5992, 1.0058, 2.9491, 1.0239, 1.4319], device='cuda:6'), covar=tensor([0.3106, 0.2320, 0.2050, 0.2375, 0.1858, 0.0267, 0.2558, 0.1297], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:01:57,142 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2156, 2.0877, 1.7250, 0.8073, 1.8274, 1.8578, 1.6493, 1.8449], device='cuda:6'), covar=tensor([0.0972, 0.0699, 0.1490, 0.1781, 0.1298, 0.1744, 0.2003, 0.0922], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0193, 0.0198, 0.0182, 0.0212, 0.0205, 0.0223, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:02:05,452 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81472.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:02:08,459 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81477.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:02:15,572 INFO [finetune.py:976] (6/7) Epoch 15, batch 1300, loss[loss=0.1847, simple_loss=0.2525, pruned_loss=0.05849, over 4751.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.251, pruned_loss=0.05833, over 955685.40 frames. ], batch size: 59, lr: 3.50e-03, grad_scale: 64.0 2023-03-26 18:02:24,859 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4328, 2.3492, 2.0694, 1.0636, 2.1494, 1.9415, 1.7735, 2.1482], device='cuda:6'), covar=tensor([0.0898, 0.0721, 0.1512, 0.1961, 0.1430, 0.1884, 0.1925, 0.0898], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0193, 0.0198, 0.0182, 0.0212, 0.0205, 0.0223, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:02:49,395 INFO [finetune.py:976] (6/7) Epoch 15, batch 1350, loss[loss=0.1911, simple_loss=0.2539, pruned_loss=0.06412, over 4097.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2524, pruned_loss=0.05977, over 955634.79 frames. 
], batch size: 65, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:02:53,476 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.608e+02 1.859e+02 2.257e+02 3.880e+02, threshold=3.719e+02, percent-clipped=1.0 2023-03-26 18:02:53,622 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3464, 1.4411, 1.4261, 0.9625, 1.4775, 1.7139, 1.7903, 1.3130], device='cuda:6'), covar=tensor([0.0939, 0.0592, 0.0552, 0.0470, 0.0490, 0.0729, 0.0296, 0.0691], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0123, 0.0129, 0.0130, 0.0127, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3142e-05, 1.1050e-04, 8.8533e-05, 9.2727e-05, 9.2418e-05, 9.1947e-05, 1.0279e-04, 1.0546e-04], device='cuda:6') 2023-03-26 18:03:08,486 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0612, 2.1421, 1.7784, 1.7579, 2.4878, 2.5488, 2.1434, 2.0158], device='cuda:6'), covar=tensor([0.0306, 0.0348, 0.0604, 0.0380, 0.0280, 0.0562, 0.0355, 0.0397], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0109, 0.0142, 0.0113, 0.0100, 0.0107, 0.0097, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.2706e-05, 8.4356e-05, 1.1256e-04, 8.7489e-05, 7.8312e-05, 7.9289e-05, 7.2909e-05, 8.3219e-05], device='cuda:6') 2023-03-26 18:03:10,270 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4627, 1.4737, 1.3719, 0.9034, 1.5449, 1.7162, 1.7432, 1.2851], device='cuda:6'), covar=tensor([0.1168, 0.0747, 0.0565, 0.0657, 0.0509, 0.0621, 0.0363, 0.0843], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0123, 0.0129, 0.0130, 0.0127, 0.0142, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.3033e-05, 1.1043e-04, 8.8504e-05, 9.2639e-05, 9.2251e-05, 9.1907e-05, 1.0273e-04, 1.0539e-04], device='cuda:6') 2023-03-26 18:03:22,723 INFO [finetune.py:976] (6/7) Epoch 15, batch 1400, loss[loss=0.228, simple_loss=0.3126, pruned_loss=0.07169, over 4806.00 frames. ], tot_loss[loss=0.1897, simple_loss=0.257, pruned_loss=0.06124, over 956969.01 frames. ], batch size: 39, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:03:36,118 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6823, 3.8553, 3.6479, 1.9820, 3.9113, 2.9893, 0.9070, 2.7781], device='cuda:6'), covar=tensor([0.2408, 0.2031, 0.1464, 0.3276, 0.1031, 0.0962, 0.4480, 0.1411], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0174, 0.0158, 0.0128, 0.0156, 0.0121, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 18:03:36,754 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.1003, 4.4511, 4.6654, 4.9627, 4.8278, 4.5333, 5.2242, 1.5997], device='cuda:6'), covar=tensor([0.0628, 0.0831, 0.0648, 0.0758, 0.1016, 0.1498, 0.0471, 0.5499], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0242, 0.0271, 0.0290, 0.0328, 0.0280, 0.0295, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:03:56,018 INFO [finetune.py:976] (6/7) Epoch 15, batch 1450, loss[loss=0.1684, simple_loss=0.2347, pruned_loss=0.05108, over 4799.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2578, pruned_loss=0.06097, over 957911.88 frames. 
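
Annotation: grad_scale in the entries above doubles from 32.0 to 64.0 around batch 900 and falls back to 32.0 by batch 1350, the signature of dynamic fp16 loss scaling: the scale grows after a long streak of overflow-free steps and is halved when an inf/nan gradient appears. Whether the recipe drives this through torch.cuda.amp.GradScaler directly or through its own wrapper, the standard PyTorch version behaves as sketched here (the toy model and data are placeholders, not the recipe's):

import torch
from torch.cuda.amp import GradScaler, autocast

device = "cuda"
model = torch.nn.Linear(16, 1).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = GradScaler(init_scale=32.0)   # cf. grad_scale: 32.0 in the log

for step in range(100):
    x = torch.randn(8, 16, device=device)
    opt.zero_grad()
    with autocast():                   # fp16 forward/backward region
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()      # scale loss so fp16 grads stay finite
    scaler.step(opt)                   # unscales grads; skips step on inf/nan
    scaler.update()                    # doubles the scale after a streak of
                                       # good steps, halves it on overflow
    if step % 10 == 0:
        print(step, scaler.get_scale())
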
], batch size: 25, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:04:00,100 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.620e+02 1.887e+02 2.237e+02 3.719e+02, threshold=3.774e+02, percent-clipped=1.0 2023-03-26 18:04:01,957 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6747, 1.5855, 1.4975, 1.6378, 1.0767, 3.6311, 1.4006, 1.8392], device='cuda:6'), covar=tensor([0.3293, 0.2546, 0.2177, 0.2366, 0.1878, 0.0179, 0.2415, 0.1287], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:04:07,957 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81655.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:04:09,563 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=81657.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:04:18,073 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81670.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:04:29,491 INFO [finetune.py:976] (6/7) Epoch 15, batch 1500, loss[loss=0.2141, simple_loss=0.2754, pruned_loss=0.07638, over 4044.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2596, pruned_loss=0.06128, over 958062.95 frames. ], batch size: 65, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:05:16,714 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=81731.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 18:05:20,803 INFO [finetune.py:976] (6/7) Epoch 15, batch 1550, loss[loss=0.1796, simple_loss=0.2532, pruned_loss=0.05298, over 4843.00 frames. ], tot_loss[loss=0.1911, simple_loss=0.2593, pruned_loss=0.06145, over 957820.36 frames. ], batch size: 44, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:05:24,957 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.530e+02 1.898e+02 2.293e+02 4.636e+02, threshold=3.795e+02, percent-clipped=1.0 2023-03-26 18:05:50,080 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81772.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:06:03,756 INFO [finetune.py:976] (6/7) Epoch 15, batch 1600, loss[loss=0.1776, simple_loss=0.239, pruned_loss=0.05806, over 4915.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2563, pruned_loss=0.06074, over 954895.43 frames. ], batch size: 43, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:06:25,748 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=81820.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:06:37,140 INFO [finetune.py:976] (6/7) Epoch 15, batch 1650, loss[loss=0.15, simple_loss=0.2183, pruned_loss=0.0408, over 4821.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2527, pruned_loss=0.05904, over 956133.50 frames. ], batch size: 40, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:06:40,762 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.362e+01 1.564e+02 1.826e+02 2.251e+02 4.924e+02, threshold=3.651e+02, percent-clipped=3.0 2023-03-26 18:06:46,326 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. 
limit=2.0 2023-03-26 18:06:54,466 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5963, 1.4643, 1.4422, 1.4454, 0.9128, 2.8942, 1.0549, 1.3891], device='cuda:6'), covar=tensor([0.3189, 0.2585, 0.2158, 0.2493, 0.2044, 0.0254, 0.2744, 0.1452], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0124, 0.0115, 0.0098, 0.0097, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:07:03,932 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-26 18:07:18,076 INFO [finetune.py:976] (6/7) Epoch 15, batch 1700, loss[loss=0.2517, simple_loss=0.296, pruned_loss=0.1037, over 4128.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.251, pruned_loss=0.05836, over 953965.73 frames. ], batch size: 65, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:07:24,281 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0 2023-03-26 18:07:28,012 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.50 vs. limit=5.0 2023-03-26 18:07:39,489 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4176, 2.2116, 1.7415, 0.8269, 1.8932, 1.9215, 1.7431, 1.9169], device='cuda:6'), covar=tensor([0.0889, 0.0716, 0.1535, 0.1967, 0.1314, 0.2248, 0.2073, 0.0852], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0194, 0.0199, 0.0183, 0.0213, 0.0207, 0.0224, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:07:44,239 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7948, 1.6091, 2.3882, 3.6825, 2.4944, 2.4460, 1.1405, 2.9295], device='cuda:6'), covar=tensor([0.1804, 0.1443, 0.1340, 0.0581, 0.0804, 0.1623, 0.1866, 0.0594], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0099, 0.0136, 0.0123, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 18:07:51,483 INFO [finetune.py:976] (6/7) Epoch 15, batch 1750, loss[loss=0.1947, simple_loss=0.2592, pruned_loss=0.06506, over 4869.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2528, pruned_loss=0.0592, over 952988.74 frames. 
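
Annotation on the scaling.py:679 "Whitening" lines: zipformer's Whiten module measures how far a layer's channel covariance is from a multiple of the identity and intervenes (via a gradient penalty) when the measured metric exceeds the logged limit. The exact formula lives in icefall's scaling.py; the version below is a hedged reconstruction that is 1.0 for perfectly white features and grows as the covariance spectrum concentrates.

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # Hedged reconstruction, not the authoritative scaling.py formula:
    # per group of channels, compare the channel covariance against a
    # multiple of the identity; 1.0 when perfectly "white".
    x = x.reshape(-1, x.shape[-1])                     # (frames, channels)
    num_frames, num_channels = x.shape
    x = x.reshape(num_frames, num_groups, num_channels // num_groups)
    x = x.transpose(0, 1)                              # (groups, frames, c)
    cov = torch.matmul(x.transpose(1, 2), x) / num_frames  # (groups, c, c)
    c = cov.shape[-1]
    mean_diag = cov.diagonal(dim1=-2, dim2=-1).mean(dim=-1)
    metric = (cov ** 2).sum(dim=(-2, -1)) / (c * mean_diag ** 2)
    return metric.mean()

feats = torch.randn(200, 96)                  # num_channels=96 as in the log
print(whitening_metric(feats, num_groups=8))  # ~1.05 here, logged vs. limit=2.0
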
], batch size: 31, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:07:55,585 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.510e+02 1.914e+02 2.293e+02 4.004e+02, threshold=3.828e+02, percent-clipped=1.0 2023-03-26 18:08:02,955 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81955.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:04,187 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=81957.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:19,349 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5925, 3.3944, 3.2957, 1.6870, 3.5454, 2.6783, 0.7707, 2.3730], device='cuda:6'), covar=tensor([0.2314, 0.2176, 0.1536, 0.3264, 0.1157, 0.1039, 0.4369, 0.1567], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0172, 0.0157, 0.0127, 0.0156, 0.0121, 0.0144, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 18:08:24,912 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=81987.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:25,410 INFO [finetune.py:976] (6/7) Epoch 15, batch 1800, loss[loss=0.1893, simple_loss=0.2721, pruned_loss=0.05323, over 4819.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2552, pruned_loss=0.05973, over 953108.40 frames. ], batch size: 40, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:08:36,208 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82003.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:37,855 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82005.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:08:51,333 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0586, 0.9013, 0.8720, 0.4007, 0.8304, 1.0496, 1.1069, 0.9086], device='cuda:6'), covar=tensor([0.0765, 0.0579, 0.0524, 0.0493, 0.0539, 0.0605, 0.0391, 0.0620], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0151, 0.0123, 0.0129, 0.0130, 0.0127, 0.0141, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.2728e-05, 1.0994e-04, 8.8504e-05, 9.2230e-05, 9.1852e-05, 9.1658e-05, 1.0217e-04, 1.0532e-04], device='cuda:6') 2023-03-26 18:08:52,964 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82026.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 18:09:00,021 INFO [finetune.py:976] (6/7) Epoch 15, batch 1850, loss[loss=0.2008, simple_loss=0.2616, pruned_loss=0.07002, over 4915.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2565, pruned_loss=0.05977, over 953128.22 frames. ], batch size: 38, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:09:03,672 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.664e+02 1.894e+02 2.440e+02 3.763e+02, threshold=3.787e+02, percent-clipped=0.0 2023-03-26 18:09:06,723 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82048.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:09:11,016 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:09:33,285 INFO [finetune.py:976] (6/7) Epoch 15, batch 1900, loss[loss=0.1907, simple_loss=0.2604, pruned_loss=0.06046, over 4882.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2567, pruned_loss=0.05934, over 953200.06 frames. 
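
Annotation on the zipformer.py:1188 lines: each encoder stack logs its warmup window in batches (the staggered 666.7/1333.3/.../4000.0 boundaries, one window per stack) and which of its layers were randomly sampled to be skipped this step. At batch_count around 81k, far past warmup_end=4000.0, num_to_drop is almost always 0, with an occasional single layer (e.g. layers_to_drop={3} above). The sampler below sketches such a schedule; the probabilities and linear decay are assumptions, not zipformer's exact code.

import random

def sample_layers_to_drop(num_layers: int, batch_count: float,
                          warmup_begin: float, warmup_end: float,
                          p_initial: float = 0.5,
                          p_final: float = 0.05) -> set:
    # Illustrative layer-skip sampler: high drop probability early in
    # warmup, decaying linearly to a small floor once batch_count passes
    # warmup_end (constants assumed, not zipformer's).
    if batch_count >= warmup_end:
        p = p_final
    elif batch_count <= warmup_begin:
        p = p_initial
    else:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = p_initial + frac * (p_final - p_initial)
    dropped = {i for i in range(num_layers) if random.random() < p}
    print(f"warmup_begin={warmup_begin}, warmup_end={warmup_end}, "
          f"batch_count={batch_count}, num_to_drop={len(dropped)}, "
          f"layers_to_drop={dropped or set()}")
    return dropped

sample_layers_to_drop(num_layers=4, batch_count=81129.0,
                      warmup_begin=1333.3, warmup_end=2000.0)
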
], batch size: 32, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:09:51,818 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82116.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:10:16,118 INFO [finetune.py:976] (6/7) Epoch 15, batch 1950, loss[loss=0.1613, simple_loss=0.2375, pruned_loss=0.04258, over 4783.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.256, pruned_loss=0.0596, over 954128.57 frames. ], batch size: 26, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:10:24,223 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.964e+01 1.525e+02 1.906e+02 2.226e+02 4.434e+02, threshold=3.812e+02, percent-clipped=2.0 2023-03-26 18:11:01,528 INFO [finetune.py:976] (6/7) Epoch 15, batch 2000, loss[loss=0.1712, simple_loss=0.2372, pruned_loss=0.05261, over 4761.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2539, pruned_loss=0.05965, over 952591.29 frames. ], batch size: 54, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:11:38,380 INFO [finetune.py:976] (6/7) Epoch 15, batch 2050, loss[loss=0.1539, simple_loss=0.2176, pruned_loss=0.04513, over 4850.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2506, pruned_loss=0.05832, over 956114.85 frames. ], batch size: 44, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:11:42,513 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.417e+02 1.727e+02 2.274e+02 4.171e+02, threshold=3.454e+02, percent-clipped=1.0 2023-03-26 18:12:24,922 INFO [finetune.py:976] (6/7) Epoch 15, batch 2100, loss[loss=0.206, simple_loss=0.2731, pruned_loss=0.06951, over 4854.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2511, pruned_loss=0.05852, over 955838.74 frames. ], batch size: 49, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:12:54,220 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82326.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:02,491 INFO [finetune.py:976] (6/7) Epoch 15, batch 2150, loss[loss=0.1908, simple_loss=0.2733, pruned_loss=0.05418, over 4908.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2554, pruned_loss=0.06011, over 955036.99 frames. ], batch size: 43, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:13:06,113 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82343.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:06,660 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.137e+02 1.626e+02 1.861e+02 2.291e+02 4.001e+02, threshold=3.721e+02, percent-clipped=2.0 2023-03-26 18:13:07,995 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82346.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:09,491 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 18:13:16,996 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. 
limit=2.0 2023-03-26 18:13:21,308 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2822, 2.2268, 2.2725, 1.6427, 2.3529, 2.4565, 2.3577, 1.9912], device='cuda:6'), covar=tensor([0.0564, 0.0571, 0.0589, 0.0871, 0.0647, 0.0679, 0.0603, 0.0950], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0134, 0.0142, 0.0124, 0.0125, 0.0142, 0.0141, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:13:25,987 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:35,321 INFO [finetune.py:976] (6/7) Epoch 15, batch 2200, loss[loss=0.1678, simple_loss=0.244, pruned_loss=0.04574, over 4872.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2551, pruned_loss=0.05881, over 955173.97 frames. ], batch size: 31, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:13:48,052 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82407.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:50,427 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82411.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:13:51,165 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 18:14:08,110 INFO [finetune.py:976] (6/7) Epoch 15, batch 2250, loss[loss=0.2091, simple_loss=0.2844, pruned_loss=0.06687, over 4820.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2567, pruned_loss=0.05953, over 957434.95 frames. ], batch size: 47, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:14:08,261 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2103, 2.2635, 1.9456, 2.2690, 2.1526, 2.0929, 2.1706, 2.9568], device='cuda:6'), covar=tensor([0.4093, 0.4825, 0.3401, 0.4569, 0.4320, 0.2459, 0.4459, 0.1672], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0259, 0.0225, 0.0275, 0.0246, 0.0214, 0.0248, 0.0225], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:14:12,178 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.505e+02 1.711e+02 2.071e+02 3.892e+02, threshold=3.421e+02, percent-clipped=2.0 2023-03-26 18:14:38,250 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8146, 4.0740, 3.9013, 1.9138, 4.2021, 3.1996, 0.9417, 2.9759], device='cuda:6'), covar=tensor([0.2688, 0.1735, 0.1438, 0.3208, 0.0924, 0.0909, 0.4353, 0.1180], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0172, 0.0157, 0.0127, 0.0155, 0.0121, 0.0144, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 18:14:41,716 INFO [finetune.py:976] (6/7) Epoch 15, batch 2300, loss[loss=0.1905, simple_loss=0.2644, pruned_loss=0.05836, over 4838.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2572, pruned_loss=0.05984, over 957958.99 frames. ], batch size: 49, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:14:45,290 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82493.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:14:45,983 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0 2023-03-26 18:15:17,446 INFO [finetune.py:976] (6/7) Epoch 15, batch 2350, loss[loss=0.1841, simple_loss=0.2452, pruned_loss=0.06146, over 4922.00 frames. 
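
Annotation on the zipformer.py:2441 dumps: attn_weights_entropy is a per-head health statistic, the entropy of each head's attention distribution averaged over queries and batch, printed alongside covariance statistics of the attention projections. Values near 0 indicate a head collapsed onto single keys; values near log(key_len) indicate a near-uniform head. A sketch of the entropy part only:

import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (num_heads, batch, query_len, key_len), rows summing to 1.
    # Returns one value per head: mean entropy over batch and query
    # position.  A sketch of the diagnostic, not zipformer's exact code.
    eps = 1.0e-20
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)
    return entropy.mean(dim=(1, 2))

attn = torch.softmax(torch.randn(8, 2, 50, 50), dim=-1)
print(attn_weights_entropy(attn))   # 8 per-head values, cf. the dumps above
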
], tot_loss[loss=0.1868, simple_loss=0.255, pruned_loss=0.05926, over 958494.20 frames. ], batch size: 38, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:15:21,098 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.644e+02 1.984e+02 2.390e+02 4.799e+02, threshold=3.967e+02, percent-clipped=3.0 2023-03-26 18:15:30,636 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82554.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 18:15:40,565 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5131, 2.4439, 2.1645, 2.6114, 2.4731, 2.2443, 2.7871, 2.4769], device='cuda:6'), covar=tensor([0.1296, 0.1917, 0.2811, 0.2250, 0.2409, 0.1623, 0.2638, 0.1840], device='cuda:6'), in_proj_covar=tensor([0.0181, 0.0187, 0.0235, 0.0254, 0.0245, 0.0200, 0.0213, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:15:44,889 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:16:00,854 INFO [finetune.py:976] (6/7) Epoch 15, batch 2400, loss[loss=0.185, simple_loss=0.2404, pruned_loss=0.06478, over 4938.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2525, pruned_loss=0.05886, over 958360.83 frames. ], batch size: 33, lr: 3.50e-03, grad_scale: 32.0 2023-03-26 18:16:24,017 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2501, 2.1830, 1.7796, 2.2102, 2.1879, 1.9462, 2.5258, 2.2444], device='cuda:6'), covar=tensor([0.1306, 0.2039, 0.2859, 0.2428, 0.2363, 0.1599, 0.2897, 0.1706], device='cuda:6'), in_proj_covar=tensor([0.0181, 0.0187, 0.0235, 0.0254, 0.0245, 0.0200, 0.0213, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:16:37,313 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=82630.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:16:42,548 INFO [finetune.py:976] (6/7) Epoch 15, batch 2450, loss[loss=0.1829, simple_loss=0.238, pruned_loss=0.06386, over 4831.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.249, pruned_loss=0.05736, over 957674.91 frames. 
], batch size: 33, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:16:45,699 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82643.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:16:46,163 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.277e+01 1.596e+02 1.937e+02 2.264e+02 4.235e+02, threshold=3.875e+02, percent-clipped=1.0 2023-03-26 18:16:52,708 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3229, 1.4242, 1.5761, 0.7866, 1.5294, 1.7412, 1.7974, 1.4219], device='cuda:6'), covar=tensor([0.0952, 0.0691, 0.0429, 0.0575, 0.0464, 0.0629, 0.0307, 0.0648], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0151, 0.0123, 0.0129, 0.0129, 0.0126, 0.0141, 0.0146], device='cuda:6'), out_proj_covar=tensor([9.2442e-05, 1.1000e-04, 8.8167e-05, 9.2167e-05, 9.1682e-05, 9.1264e-05, 1.0149e-04, 1.0547e-04], device='cuda:6') 2023-03-26 18:17:04,468 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6137, 1.5164, 1.4525, 1.5602, 1.2120, 3.0360, 1.1524, 1.6888], device='cuda:6'), covar=tensor([0.3465, 0.2588, 0.2203, 0.2528, 0.1828, 0.0251, 0.2613, 0.1266], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0114, 0.0097, 0.0097, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:17:18,146 INFO [finetune.py:976] (6/7) Epoch 15, batch 2500, loss[loss=0.1777, simple_loss=0.2527, pruned_loss=0.05137, over 4866.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2497, pruned_loss=0.05779, over 955897.06 frames. ], batch size: 44, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:17:20,059 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82691.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:17:20,763 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0559, 2.0924, 1.7326, 2.1480, 1.9627, 1.9829, 1.9792, 2.7013], device='cuda:6'), covar=tensor([0.4270, 0.4856, 0.3590, 0.4681, 0.4708, 0.2564, 0.4829, 0.1914], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0259, 0.0226, 0.0275, 0.0247, 0.0214, 0.0248, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:17:35,742 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82702.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:17:37,612 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 18:17:46,363 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=82711.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:17:57,257 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6785, 1.5310, 1.1140, 0.2701, 1.3472, 1.4758, 1.4371, 1.4393], device='cuda:6'), covar=tensor([0.0957, 0.0919, 0.1446, 0.2086, 0.1483, 0.2563, 0.2553, 0.0914], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0194, 0.0200, 0.0184, 0.0213, 0.0207, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:18:03,674 INFO [finetune.py:976] (6/7) Epoch 15, batch 2550, loss[loss=0.2261, simple_loss=0.2948, pruned_loss=0.0787, over 4933.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2539, pruned_loss=0.05859, over 956869.19 frames. 
], batch size: 42, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:18:06,086 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5121, 1.4587, 1.3940, 1.5081, 1.2479, 3.5937, 1.4452, 1.8559], device='cuda:6'), covar=tensor([0.3499, 0.2595, 0.2230, 0.2473, 0.1740, 0.0179, 0.2712, 0.1302], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:18:07,766 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.574e+02 1.850e+02 2.269e+02 4.152e+02, threshold=3.700e+02, percent-clipped=3.0 2023-03-26 18:18:16,096 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0735, 1.9633, 1.7023, 2.0011, 2.0272, 1.7720, 2.2890, 2.1078], device='cuda:6'), covar=tensor([0.1323, 0.2274, 0.2869, 0.2498, 0.2426, 0.1571, 0.3298, 0.1587], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0188, 0.0236, 0.0255, 0.0247, 0.0201, 0.0215, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:18:17,866 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=82759.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:18:36,858 INFO [finetune.py:976] (6/7) Epoch 15, batch 2600, loss[loss=0.2157, simple_loss=0.2839, pruned_loss=0.07372, over 4889.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2556, pruned_loss=0.05903, over 957040.70 frames. ], batch size: 35, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:19:06,670 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7257, 0.7019, 1.7078, 1.6674, 1.5287, 1.5033, 1.5463, 1.6285], device='cuda:6'), covar=tensor([0.3342, 0.3549, 0.3094, 0.3127, 0.4223, 0.3277, 0.3847, 0.2906], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0238, 0.0256, 0.0268, 0.0266, 0.0239, 0.0280, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:19:10,653 INFO [finetune.py:976] (6/7) Epoch 15, batch 2650, loss[loss=0.1889, simple_loss=0.2469, pruned_loss=0.06545, over 4851.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2587, pruned_loss=0.06053, over 956711.58 frames. ], batch size: 47, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:19:14,266 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.056e+02 1.579e+02 1.879e+02 2.251e+02 6.929e+02, threshold=3.759e+02, percent-clipped=2.0 2023-03-26 18:19:17,850 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82849.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:19:18,473 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4385, 1.3711, 1.9980, 2.8854, 1.9439, 2.1040, 0.9841, 2.4968], device='cuda:6'), covar=tensor([0.1726, 0.1492, 0.1121, 0.0630, 0.0795, 0.1520, 0.1628, 0.0508], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0133, 0.0164, 0.0100, 0.0138, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 18:19:36,864 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 18:19:43,179 INFO [finetune.py:976] (6/7) Epoch 15, batch 2700, loss[loss=0.1817, simple_loss=0.2578, pruned_loss=0.05283, over 4816.00 frames. 
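
Annotation: every loss[...] entry carries three numbers. simple_loss is the cheap linear-joiner RNN-T bound used to choose the pruning bounds, pruned_loss is the full-joiner RNN-T loss evaluated only inside the pruned band of the lattice, and the logged totals are consistent with loss = 0.5 * simple_loss + pruned_loss, both per batch (batch 700: 0.5 x 0.3581 + 0.1357 = 0.3148) and in the running totals (0.5 x 0.2559 + 0.0604 = 0.1884, vs. the logged 0.1883). A one-line consistency check, with the 0.5 inferred from the numbers rather than quoted from the recipe:

def combined_loss(simple_loss: float, pruned_loss: float,
                  simple_loss_scale: float = 0.5) -> float:
    # The 0.5 is inferred from the logged numbers, not quoted from the code.
    return simple_loss_scale * simple_loss + pruned_loss

# tot_loss at batch 700 above: loss=0.1883, simple=0.2559, pruned=0.0604
print(combined_loss(0.2559, 0.0604))   # -> 0.18835, matching loss=0.1883
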
], tot_loss[loss=0.1887, simple_loss=0.2572, pruned_loss=0.06012, over 955488.42 frames. ], batch size: 40, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:19:56,932 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3049, 2.2150, 1.9042, 2.2433, 2.1461, 2.0941, 2.1018, 2.9904], device='cuda:6'), covar=tensor([0.3554, 0.4774, 0.3378, 0.4708, 0.4532, 0.2256, 0.4541, 0.1479], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0258, 0.0224, 0.0274, 0.0246, 0.0213, 0.0247, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:20:08,063 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=82925.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:20:16,389 INFO [finetune.py:976] (6/7) Epoch 15, batch 2750, loss[loss=0.1634, simple_loss=0.208, pruned_loss=0.05938, over 4058.00 frames. ], tot_loss[loss=0.1854, simple_loss=0.2533, pruned_loss=0.05872, over 955699.29 frames. ], batch size: 17, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:20:20,501 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.574e+02 1.758e+02 2.107e+02 4.076e+02, threshold=3.515e+02, percent-clipped=2.0 2023-03-26 18:20:33,257 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=82963.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:20:49,687 INFO [finetune.py:976] (6/7) Epoch 15, batch 2800, loss[loss=0.1463, simple_loss=0.2245, pruned_loss=0.03406, over 4910.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2494, pruned_loss=0.05696, over 955181.78 frames. ], batch size: 32, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:21:07,087 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83002.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:21:08,657 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.79 vs. limit=5.0 2023-03-26 18:21:11,852 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8754, 1.6378, 1.5349, 1.2929, 1.6602, 1.6822, 1.6492, 2.1959], device='cuda:6'), covar=tensor([0.4162, 0.4288, 0.3264, 0.3827, 0.3735, 0.2398, 0.3739, 0.1941], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0258, 0.0224, 0.0274, 0.0246, 0.0213, 0.0247, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:21:15,077 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-26 18:21:21,957 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83024.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:21:37,767 INFO [finetune.py:976] (6/7) Epoch 15, batch 2850, loss[loss=0.2012, simple_loss=0.2804, pruned_loss=0.061, over 4913.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2481, pruned_loss=0.05597, over 955883.02 frames. 
], batch size: 43, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:21:41,399 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.604e+02 1.866e+02 2.227e+02 4.125e+02, threshold=3.733e+02, percent-clipped=3.0 2023-03-26 18:21:49,028 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83050.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:22:12,203 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3038, 2.1544, 2.3073, 1.6312, 2.1798, 2.3274, 2.4295, 1.8566], device='cuda:6'), covar=tensor([0.0623, 0.0646, 0.0726, 0.0949, 0.0658, 0.0787, 0.0619, 0.1178], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0136, 0.0144, 0.0124, 0.0125, 0.0142, 0.0143, 0.0165], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:22:15,028 INFO [finetune.py:976] (6/7) Epoch 15, batch 2900, loss[loss=0.2319, simple_loss=0.2904, pruned_loss=0.08667, over 4738.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2516, pruned_loss=0.05769, over 954583.81 frames. ], batch size: 54, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:22:57,706 INFO [finetune.py:976] (6/7) Epoch 15, batch 2950, loss[loss=0.1507, simple_loss=0.2267, pruned_loss=0.03734, over 4776.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2548, pruned_loss=0.05844, over 955419.04 frames. ], batch size: 26, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:23:01,328 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.293e+02 1.748e+02 2.030e+02 2.368e+02 3.585e+02, threshold=4.059e+02, percent-clipped=0.0 2023-03-26 18:23:08,897 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83149.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:23:19,994 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6608, 1.1797, 0.8469, 1.5276, 2.0023, 1.0851, 1.3904, 1.5995], device='cuda:6'), covar=tensor([0.1420, 0.1999, 0.1911, 0.1159, 0.1875, 0.1949, 0.1345, 0.1644], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 18:23:33,510 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3435, 3.8439, 3.9341, 4.2191, 4.1460, 3.8527, 4.4433, 1.4499], device='cuda:6'), covar=tensor([0.0687, 0.0769, 0.0796, 0.0893, 0.1138, 0.1472, 0.0622, 0.5054], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0247, 0.0279, 0.0295, 0.0336, 0.0286, 0.0303, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:23:37,596 INFO [finetune.py:976] (6/7) Epoch 15, batch 3000, loss[loss=0.198, simple_loss=0.2664, pruned_loss=0.06486, over 4748.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2568, pruned_loss=0.05947, over 956490.84 frames. 
], batch size: 27, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:23:37,596 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 18:23:46,679 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8804, 3.5212, 3.5852, 3.8012, 3.6362, 3.5521, 3.9787, 1.3466], device='cuda:6'), covar=tensor([0.0922, 0.0974, 0.0924, 0.0953, 0.1513, 0.1559, 0.0886, 0.5331], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0247, 0.0279, 0.0294, 0.0336, 0.0286, 0.0303, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:23:46,929 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5194, 1.5157, 1.5949, 1.5870, 1.7023, 3.0833, 1.4152, 1.6135], device='cuda:6'), covar=tensor([0.0921, 0.1745, 0.0969, 0.0952, 0.1457, 0.0307, 0.1422, 0.1626], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0073, 0.0077, 0.0092, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:23:48,368 INFO [finetune.py:1010] (6/7) Epoch 15, validation: loss=0.1564, simple_loss=0.2269, pruned_loss=0.04296, over 2265189.00 frames. 2023-03-26 18:23:48,369 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 18:23:59,728 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83196.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:24:00,271 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83197.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 18:24:19,117 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9365, 3.8659, 3.7562, 2.1940, 4.0266, 3.1531, 1.1545, 2.9117], device='cuda:6'), covar=tensor([0.2649, 0.1413, 0.1239, 0.2773, 0.0812, 0.0827, 0.3744, 0.1266], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0174, 0.0159, 0.0127, 0.0157, 0.0122, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 18:24:21,548 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83225.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:24:24,373 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.62 vs. limit=5.0 2023-03-26 18:24:30,315 INFO [finetune.py:976] (6/7) Epoch 15, batch 3050, loss[loss=0.1671, simple_loss=0.2393, pruned_loss=0.04743, over 4915.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2566, pruned_loss=0.05875, over 956597.06 frames. 
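
Annotation on the "Computing validation loss" / finetune.py:1010 entries above: training periodically pauses for a dev-set pass, and the result is reported over 2265189.00 frames, i.e. averaged per frame rather than per batch, so long and short cuts are weighted fairly. A minimal sketch of such a pass; the contract that the model returns a summed loss and a frame count per batch is an illustrative assumption.

import torch

def compute_validation_loss(model, valid_loader):
    # Run the dev loader under no_grad and average the loss over frames,
    # not over batches (cf. "over 2265189.00 frames" in the entry above).
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = model(batch)   # assumed contract, see above
            tot_loss += loss.item()
            tot_frames += num_frames
    model.train()
    return tot_loss / max(tot_frames, 1.0), tot_frames
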
], batch size: 38, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:24:32,729 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6286, 1.5492, 1.4949, 1.6902, 0.9445, 3.5550, 1.3884, 1.8183], device='cuda:6'), covar=tensor([0.3280, 0.2511, 0.2121, 0.2228, 0.1910, 0.0172, 0.2450, 0.1272], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0114, 0.0097, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:24:34,915 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.542e+02 1.763e+02 2.135e+02 3.801e+02, threshold=3.526e+02, percent-clipped=0.0 2023-03-26 18:24:44,050 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83257.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:24:45,872 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4497, 1.5288, 1.6295, 0.8429, 1.6111, 1.9097, 1.8572, 1.4294], device='cuda:6'), covar=tensor([0.0839, 0.0609, 0.0439, 0.0568, 0.0388, 0.0521, 0.0288, 0.0680], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0122, 0.0127, 0.0128, 0.0125, 0.0139, 0.0145], device='cuda:6'), out_proj_covar=tensor([9.1206e-05, 1.0902e-04, 8.7981e-05, 9.1305e-05, 9.0558e-05, 9.0508e-05, 1.0043e-04, 1.0509e-04], device='cuda:6') 2023-03-26 18:24:54,194 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83273.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:25:01,300 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83284.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:25:04,096 INFO [finetune.py:976] (6/7) Epoch 15, batch 3100, loss[loss=0.2018, simple_loss=0.2655, pruned_loss=0.06903, over 4845.00 frames. ], tot_loss[loss=0.1862, simple_loss=0.2555, pruned_loss=0.05847, over 958665.61 frames. ], batch size: 49, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:25:24,780 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83319.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:25:37,274 INFO [finetune.py:976] (6/7) Epoch 15, batch 3150, loss[loss=0.2097, simple_loss=0.2699, pruned_loss=0.07479, over 4867.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2526, pruned_loss=0.05738, over 959362.36 frames. ], batch size: 34, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:25:41,380 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.524e+02 1.821e+02 2.258e+02 3.585e+02, threshold=3.643e+02, percent-clipped=2.0 2023-03-26 18:25:42,013 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1997, 1.9423, 1.6430, 1.9396, 1.8750, 1.8891, 1.8852, 2.6847], device='cuda:6'), covar=tensor([0.4064, 0.5048, 0.3745, 0.4518, 0.4635, 0.2616, 0.4327, 0.1782], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0258, 0.0225, 0.0273, 0.0245, 0.0213, 0.0247, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:25:42,611 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83345.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:26:11,637 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0 2023-03-26 18:26:12,519 INFO [finetune.py:976] (6/7) Epoch 15, batch 3200, loss[loss=0.189, simple_loss=0.2567, pruned_loss=0.06067, over 4771.00 frames. 
], tot_loss[loss=0.1823, simple_loss=0.2505, pruned_loss=0.05706, over 960149.86 frames. ], batch size: 28, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:26:55,878 INFO [finetune.py:976] (6/7) Epoch 15, batch 3250, loss[loss=0.1991, simple_loss=0.2616, pruned_loss=0.06829, over 4824.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2509, pruned_loss=0.05711, over 961001.65 frames. ], batch size: 30, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:27:00,086 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.648e+01 1.539e+02 1.854e+02 2.232e+02 3.646e+02, threshold=3.708e+02, percent-clipped=1.0 2023-03-26 18:27:00,245 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2436, 2.1232, 1.6171, 2.2599, 2.0653, 1.7336, 2.4960, 2.1824], device='cuda:6'), covar=tensor([0.1313, 0.2073, 0.3128, 0.2393, 0.2481, 0.1689, 0.3027, 0.1696], device='cuda:6'), in_proj_covar=tensor([0.0180, 0.0186, 0.0234, 0.0252, 0.0243, 0.0200, 0.0211, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:27:11,012 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.02 vs. limit=5.0 2023-03-26 18:27:29,591 INFO [finetune.py:976] (6/7) Epoch 15, batch 3300, loss[loss=0.2024, simple_loss=0.276, pruned_loss=0.06438, over 4910.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2554, pruned_loss=0.05899, over 958015.34 frames. ], batch size: 36, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:27:37,352 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 18:27:54,224 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2204, 2.1581, 1.8092, 0.9445, 2.0359, 1.7873, 1.4969, 1.9368], device='cuda:6'), covar=tensor([0.0985, 0.0796, 0.1518, 0.2035, 0.1296, 0.2176, 0.2467, 0.1047], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0193, 0.0198, 0.0183, 0.0212, 0.0205, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:28:07,468 INFO [finetune.py:976] (6/7) Epoch 15, batch 3350, loss[loss=0.1945, simple_loss=0.2737, pruned_loss=0.05766, over 4777.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2568, pruned_loss=0.059, over 957881.12 frames. ], batch size: 26, lr: 3.49e-03, grad_scale: 64.0 2023-03-26 18:28:14,628 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.792e+02 2.041e+02 2.510e+02 5.102e+02, threshold=4.082e+02, percent-clipped=3.0 2023-03-26 18:28:15,392 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0829, 1.9772, 1.8227, 2.1564, 2.5323, 2.0376, 2.0408, 1.7380], device='cuda:6'), covar=tensor([0.1636, 0.1693, 0.1483, 0.1347, 0.1554, 0.1007, 0.1900, 0.1490], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0210, 0.0192, 0.0243, 0.0185, 0.0216, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:28:21,872 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83552.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:28:24,180 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.52 vs. limit=2.0 2023-03-26 18:28:54,247 INFO [finetune.py:976] (6/7) Epoch 15, batch 3400, loss[loss=0.186, simple_loss=0.2595, pruned_loss=0.05629, over 4792.00 frames. 
], tot_loss[loss=0.1884, simple_loss=0.2576, pruned_loss=0.05954, over 956289.60 frames. ], batch size: 29, lr: 3.49e-03, grad_scale: 64.0 2023-03-26 18:29:06,739 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 18:29:16,689 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6796, 1.5310, 2.1782, 3.3839, 2.2547, 2.4745, 1.0199, 2.7202], device='cuda:6'), covar=tensor([0.1789, 0.1545, 0.1322, 0.0684, 0.0815, 0.1359, 0.1849, 0.0612], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0133, 0.0165, 0.0101, 0.0138, 0.0124, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 18:29:24,887 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83619.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:29:31,286 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7168, 1.4394, 1.6395, 1.7560, 1.5720, 3.1297, 1.3226, 1.5506], device='cuda:6'), covar=tensor([0.0829, 0.1719, 0.1247, 0.0935, 0.1535, 0.0266, 0.1515, 0.1615], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0080, 0.0085, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:29:37,311 INFO [finetune.py:976] (6/7) Epoch 15, batch 3450, loss[loss=0.1953, simple_loss=0.2651, pruned_loss=0.06273, over 4813.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2579, pruned_loss=0.05959, over 957045.17 frames. ], batch size: 40, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:29:39,047 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=83640.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:29:41,997 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.651e+02 1.928e+02 2.236e+02 3.717e+02, threshold=3.855e+02, percent-clipped=0.0 2023-03-26 18:29:57,379 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83667.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:30:11,003 INFO [finetune.py:976] (6/7) Epoch 15, batch 3500, loss[loss=0.1424, simple_loss=0.2177, pruned_loss=0.0336, over 4839.00 frames. ], tot_loss[loss=0.1862, simple_loss=0.255, pruned_loss=0.05866, over 956420.63 frames. ], batch size: 47, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:30:44,196 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7124, 1.4278, 1.0366, 0.2014, 1.2877, 1.4662, 1.4764, 1.4234], device='cuda:6'), covar=tensor([0.0935, 0.0884, 0.1499, 0.2234, 0.1508, 0.2662, 0.2321, 0.0889], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0194, 0.0198, 0.0183, 0.0212, 0.0205, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:30:44,678 INFO [finetune.py:976] (6/7) Epoch 15, batch 3550, loss[loss=0.1857, simple_loss=0.2489, pruned_loss=0.06126, over 4829.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2522, pruned_loss=0.05798, over 956168.61 frames. 
], batch size: 30, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:30:49,415 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.222e+01 1.573e+02 1.880e+02 2.102e+02 4.250e+02, threshold=3.760e+02, percent-clipped=2.0 2023-03-26 18:30:54,399 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1048, 1.0028, 1.0050, 0.3879, 0.8503, 1.1467, 1.1546, 1.0068], device='cuda:6'), covar=tensor([0.0899, 0.0591, 0.0517, 0.0603, 0.0579, 0.0599, 0.0387, 0.0682], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0124, 0.0129, 0.0131, 0.0127, 0.0142, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.2591e-05, 1.1076e-04, 8.9314e-05, 9.2490e-05, 9.2762e-05, 9.1666e-05, 1.0203e-04, 1.0697e-04], device='cuda:6') 2023-03-26 18:30:59,118 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83760.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:31:18,472 INFO [finetune.py:976] (6/7) Epoch 15, batch 3600, loss[loss=0.1487, simple_loss=0.2131, pruned_loss=0.04212, over 4762.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2493, pruned_loss=0.05715, over 956844.26 frames. ], batch size: 54, lr: 3.49e-03, grad_scale: 32.0 2023-03-26 18:31:41,543 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2586, 1.1927, 1.4900, 1.0277, 1.2725, 1.3752, 1.2174, 1.5572], device='cuda:6'), covar=tensor([0.0910, 0.1797, 0.1067, 0.1300, 0.0692, 0.1028, 0.2185, 0.0682], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0204, 0.0192, 0.0190, 0.0176, 0.0212, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 18:31:48,178 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=83821.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:31:59,980 INFO [finetune.py:976] (6/7) Epoch 15, batch 3650, loss[loss=0.2124, simple_loss=0.2798, pruned_loss=0.0725, over 4771.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2521, pruned_loss=0.05885, over 954290.48 frames. ], batch size: 54, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:32:04,775 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.603e+02 1.941e+02 2.306e+02 4.863e+02, threshold=3.882e+02, percent-clipped=1.0 2023-03-26 18:32:09,569 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:32:20,903 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7928, 1.4940, 2.2752, 3.5712, 2.3395, 2.7000, 1.1029, 2.7438], device='cuda:6'), covar=tensor([0.1798, 0.1702, 0.1469, 0.0692, 0.0894, 0.2120, 0.1903, 0.0604], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0132, 0.0163, 0.0100, 0.0137, 0.0124, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 18:32:28,586 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4802, 1.4990, 1.6114, 1.6085, 1.6948, 2.9636, 1.4573, 1.5972], device='cuda:6'), covar=tensor([0.0978, 0.1570, 0.0963, 0.0910, 0.1392, 0.0317, 0.1263, 0.1582], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:32:33,849 INFO [finetune.py:976] (6/7) Epoch 15, batch 3700, loss[loss=0.1491, simple_loss=0.2157, pruned_loss=0.04127, over 4788.00 frames. 
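
Annotation: within each finetune.py:976 entry, loss[...] covers only the current batch (a few thousand frames) while tot_loss[...] is a frame-weighted running average, which is why it moves slowly. The frame totals hover near 950k instead of growing without bound, consistent with a decayed window: with batches of roughly 4800 frames, a decay of 0.995 settles at about 4800 / 0.005 = 960k frames. The class below sketches that bookkeeping; the names and the decay constant are illustrative, mirroring icefall's MetricsTracker only in spirit.

class RunningLoss:
    # Frame-weighted running average with exponential decay, so the window
    # stays at roughly a million recent frames (illustrative sketch).
    def __init__(self, decay: float = 0.995):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> None:
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames

    @property
    def average(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)

tracker = RunningLoss()
tracker.update(0.1491, 4788.0)   # batch 3700 above: loss over 4788 frames
print(f"tot_loss={tracker.average:.4f} over {tracker.frames:.2f} frames")
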
], tot_loss[loss=0.1877, simple_loss=0.2553, pruned_loss=0.06004, over 954536.70 frames. ], batch size: 25, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:32:41,109 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3794, 1.2361, 1.2584, 1.2736, 0.8046, 2.2202, 0.7456, 1.1315], device='cuda:6'), covar=tensor([0.3363, 0.2611, 0.2246, 0.2480, 0.1978, 0.0361, 0.2688, 0.1435], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0115, 0.0120, 0.0123, 0.0114, 0.0097, 0.0096, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 18:32:42,152 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83900.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:33:01,828 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5000, 1.5285, 2.0046, 2.9200, 1.9415, 2.1702, 1.0022, 2.4045], device='cuda:6'), covar=tensor([0.1727, 0.1322, 0.1115, 0.0533, 0.0810, 0.1381, 0.1671, 0.0539], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0131, 0.0162, 0.0100, 0.0137, 0.0123, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 18:33:07,589 INFO [finetune.py:976] (6/7) Epoch 15, batch 3750, loss[loss=0.2282, simple_loss=0.2851, pruned_loss=0.08565, over 4887.00 frames. ], tot_loss[loss=0.1902, simple_loss=0.2578, pruned_loss=0.06129, over 954408.87 frames. ], batch size: 43, lr: 3.48e-03, grad_scale: 32.0 2023-03-26 18:33:08,907 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=83940.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:33:11,800 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.596e+02 1.977e+02 2.275e+02 5.079e+02, threshold=3.955e+02, percent-clipped=1.0 2023-03-26 18:33:14,854 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83949.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:33:53,409 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=83985.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 18:33:55,621 INFO [finetune.py:976] (6/7) Epoch 15, batch 3800, loss[loss=0.1961, simple_loss=0.2586, pruned_loss=0.06676, over 4882.00 frames. ], tot_loss[loss=0.1915, simple_loss=0.2592, pruned_loss=0.06192, over 951410.84 frames. 
2023-03-26 18:33:55,682 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=83988.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:33:56,350 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5311, 1.5811, 1.2699, 1.5114, 1.9142, 1.7897, 1.5185, 1.4173], device='cuda:6'), covar=tensor([0.0338, 0.0299, 0.0578, 0.0273, 0.0192, 0.0519, 0.0334, 0.0394], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0108, 0.0140, 0.0112, 0.0099, 0.0106, 0.0097, 0.0107], device='cuda:6'), out_proj_covar=tensor([7.1813e-05, 8.3376e-05, 1.1111e-04, 8.6649e-05, 7.7356e-05, 7.7944e-05, 7.2757e-05, 8.1828e-05], device='cuda:6')
2023-03-26 18:33:59,394 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0734, 2.1453, 1.8141, 2.2427, 1.9970, 2.0302, 2.0426, 2.8312], device='cuda:6'), covar=tensor([0.4234, 0.5134, 0.3456, 0.4777, 0.4828, 0.2430, 0.4553, 0.1763], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0258, 0.0225, 0.0273, 0.0245, 0.0213, 0.0247, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:34:11,217 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84010.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 18:34:36,933 INFO [finetune.py:976] (6/7) Epoch 15, batch 3850, loss[loss=0.1569, simple_loss=0.2209, pruned_loss=0.04645, over 4223.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2577, pruned_loss=0.06074, over 952384.75 frames. ], batch size: 65, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:34:41,716 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.557e+01 1.496e+02 1.862e+02 2.338e+02 3.560e+02, threshold=3.724e+02, percent-clipped=0.0
2023-03-26 18:34:42,445 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84046.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:34:47,173 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3560, 2.0128, 2.3444, 2.3181, 2.0049, 2.0440, 2.2094, 2.1814], device='cuda:6'), covar=tensor([0.3767, 0.4366, 0.3286, 0.4077, 0.5297, 0.3833, 0.5010, 0.3253], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0239, 0.0257, 0.0268, 0.0267, 0.0240, 0.0280, 0.0236], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:35:10,033 INFO [finetune.py:976] (6/7) Epoch 15, batch 3900, loss[loss=0.1589, simple_loss=0.2294, pruned_loss=0.04415, over 4792.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.255, pruned_loss=0.06009, over 953430.22 frames. ], batch size: 26, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:35:16,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84097.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:35:28,922 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84116.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:35:43,581 INFO [finetune.py:976] (6/7) Epoch 15, batch 3950, loss[loss=0.1624, simple_loss=0.2233, pruned_loss=0.05072, over 4752.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2508, pruned_loss=0.05819, over 954627.16 frames. ], batch size: 59, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:35:47,770 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.469e+02 1.885e+02 2.278e+02 4.120e+02, threshold=3.770e+02, percent-clipped=1.0
2023-03-26 18:35:57,239 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84158.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:36:16,808 INFO [finetune.py:976] (6/7) Epoch 15, batch 4000, loss[loss=0.2545, simple_loss=0.307, pruned_loss=0.101, over 4260.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2494, pruned_loss=0.05791, over 952489.86 frames. ], batch size: 65, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:36:24,511 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84199.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:36:29,263 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1358, 2.1961, 1.9296, 2.3331, 2.8965, 2.2802, 2.1459, 1.6912], device='cuda:6'), covar=tensor([0.2415, 0.2108, 0.2046, 0.1746, 0.1846, 0.1174, 0.2227, 0.2050], device='cuda:6'), in_proj_covar=tensor([0.0238, 0.0205, 0.0207, 0.0189, 0.0239, 0.0182, 0.0213, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:36:37,136 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 18:36:39,308 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.12 vs. limit=2.0
2023-03-26 18:36:58,146 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84236.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:36:59,238 INFO [finetune.py:976] (6/7) Epoch 15, batch 4050, loss[loss=0.2289, simple_loss=0.2984, pruned_loss=0.07964, over 4800.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2519, pruned_loss=0.05892, over 953408.28 frames. ], batch size: 45, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:37:07,829 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.586e+02 1.909e+02 2.268e+02 5.729e+02, threshold=3.818e+02, percent-clipped=2.0
2023-03-26 18:37:15,031 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8683, 1.3001, 0.8825, 1.8180, 2.2089, 1.4317, 1.6273, 1.6471], device='cuda:6'), covar=tensor([0.1496, 0.2096, 0.1970, 0.1122, 0.1827, 0.2042, 0.1456, 0.2049], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0118, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 18:37:21,561 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84260.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:37:32,266 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0304, 1.4521, 0.8499, 1.9334, 2.3287, 1.6457, 1.8910, 1.9263], device='cuda:6'), covar=tensor([0.1255, 0.1916, 0.2087, 0.1087, 0.1676, 0.1855, 0.1266, 0.1766], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 18:37:39,952 INFO [finetune.py:976] (6/7) Epoch 15, batch 4100, loss[loss=0.2048, simple_loss=0.2768, pruned_loss=0.06642, over 4811.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2545, pruned_loss=0.05954, over 953923.80 frames. ], batch size: 38, lr: 3.48e-03, grad_scale: 32.0
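A pattern worth noting in the optim.py:369 lines: the reported threshold is twice the middle grad-norm quartile, up to display rounding (e.g. 2.0 * 1.885e+02 = 3.770e+02 and 2.0 * 1.909e+02 = 3.818e+02 just above), i.e. threshold = Clipping_scale * median of recent gradient norms, and percent-clipped appears to report how often recent steps exceeded it. A hedged sketch of that mechanism, as an illustration rather than the actual optimizer code:

```python
import torch

# Illustrative reimplementation (assumed, not the real optim.py): keep a
# window of recent gradient norms and clip whenever the current norm exceeds
# clipping_scale * median, matching threshold = 2.0 * the logged median.
class MedianGradClipper:
    def __init__(self, clipping_scale=2.0, window=128):
        self.clipping_scale = clipping_scale
        self.window = window
        self.norms = []

    def __call__(self, params):
        grads = [p.grad for p in params if p.grad is not None]
        norm = torch.norm(torch.stack([g.norm() for g in grads])).item()
        self.norms = (self.norms + [norm])[-self.window:]
        median = sorted(self.norms)[len(self.norms) // 2]
        threshold = self.clipping_scale * median
        if norm > threshold:  # this step would count toward percent-clipped
            for g in grads:
                g.mul_(threshold / norm)
        return norm, threshold
```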
2023-03-26 18:37:44,004 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7085, 1.1848, 0.8351, 1.6329, 1.9808, 1.4042, 1.4390, 1.6054], device='cuda:6'), covar=tensor([0.1599, 0.2221, 0.2177, 0.1268, 0.2045, 0.2298, 0.1532, 0.1973], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 18:37:46,470 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84297.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:37:52,176 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84305.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 18:38:09,237 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-26 18:38:11,868 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84336.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:38:13,433 INFO [finetune.py:976] (6/7) Epoch 15, batch 4150, loss[loss=0.2205, simple_loss=0.2898, pruned_loss=0.07564, over 4888.00 frames. ], tot_loss[loss=0.1896, simple_loss=0.2572, pruned_loss=0.06102, over 951107.61 frames. ], batch size: 35, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:38:15,806 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84341.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:38:18,114 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.618e+02 1.997e+02 2.307e+02 7.274e+02, threshold=3.993e+02, percent-clipped=3.0
2023-03-26 18:38:50,146 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 18:38:50,308 INFO [finetune.py:976] (6/7) Epoch 15, batch 4200, loss[loss=0.1511, simple_loss=0.2143, pruned_loss=0.04392, over 4730.00 frames. ], tot_loss[loss=0.1905, simple_loss=0.2582, pruned_loss=0.06136, over 952391.73 frames. ], batch size: 59, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:39:04,803 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84397.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:39:17,848 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84416.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:39:31,950 INFO [finetune.py:976] (6/7) Epoch 15, batch 4250, loss[loss=0.1497, simple_loss=0.2189, pruned_loss=0.04029, over 4844.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2562, pruned_loss=0.06063, over 953686.38 frames. ], batch size: 25, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:39:32,955 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.72 vs. limit=5.0
2023-03-26 18:39:36,666 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.324e+01 1.559e+02 1.825e+02 2.300e+02 4.289e+02, threshold=3.650e+02, percent-clipped=1.0
2023-03-26 18:39:47,480 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84453.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:39:58,790 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84464.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:40:14,296 INFO [finetune.py:976] (6/7) Epoch 15, batch 4300, loss[loss=0.1847, simple_loss=0.2442, pruned_loss=0.06263, over 4912.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.2517, pruned_loss=0.05874, over 953278.57 frames. ], batch size: 37, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:40:39,116 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84525.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:40:44,791 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.48 vs. limit=5.0
2023-03-26 18:40:47,840 INFO [finetune.py:976] (6/7) Epoch 15, batch 4350, loss[loss=0.1848, simple_loss=0.2434, pruned_loss=0.06309, over 4070.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2493, pruned_loss=0.05769, over 954832.65 frames. ], batch size: 17, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:40:52,220 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.502e+02 1.820e+02 2.196e+02 3.984e+02, threshold=3.641e+02, percent-clipped=2.0
2023-03-26 18:40:58,859 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84555.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:41:05,940 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6322, 1.6063, 1.3930, 1.6158, 2.0193, 1.8402, 1.7114, 1.5008], device='cuda:6'), covar=tensor([0.0286, 0.0331, 0.0559, 0.0280, 0.0174, 0.0583, 0.0255, 0.0390], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0109, 0.0142, 0.0113, 0.0100, 0.0107, 0.0098, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2500e-05, 8.4353e-05, 1.1258e-04, 8.7607e-05, 7.8180e-05, 7.8948e-05, 7.3349e-05, 8.2286e-05], device='cuda:6')
2023-03-26 18:41:19,643 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84586.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:41:21,099 INFO [finetune.py:976] (6/7) Epoch 15, batch 4400, loss[loss=0.1846, simple_loss=0.2614, pruned_loss=0.0539, over 4825.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2505, pruned_loss=0.05781, over 955621.47 frames. ], batch size: 40, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:41:24,146 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84592.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:41:32,792 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84605.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:41:51,855 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84633.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:41:54,833 INFO [finetune.py:976] (6/7) Epoch 15, batch 4450, loss[loss=0.2048, simple_loss=0.2764, pruned_loss=0.06663, over 4737.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2532, pruned_loss=0.05861, over 953574.81 frames. ], batch size: 54, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:41:57,727 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84641.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:42:01,981 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.595e+02 1.952e+02 2.292e+02 4.719e+02, threshold=3.904e+02, percent-clipped=1.0
2023-03-26 18:42:11,295 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84653.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:42:46,433 INFO [finetune.py:976] (6/7) Epoch 15, batch 4500, loss[loss=0.1882, simple_loss=0.2577, pruned_loss=0.05936, over 4790.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2549, pruned_loss=0.0587, over 955940.51 frames. ], batch size: 29, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:42:47,110 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84689.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:42:49,435 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84692.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:42:51,237 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84694.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:43:20,116 INFO [finetune.py:976] (6/7) Epoch 15, batch 4550, loss[loss=0.2259, simple_loss=0.2917, pruned_loss=0.08003, over 4814.00 frames. ], tot_loss[loss=0.1885, simple_loss=0.2573, pruned_loss=0.05985, over 955723.01 frames. ], batch size: 39, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:43:25,281 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.592e+02 2.005e+02 2.406e+02 4.528e+02, threshold=4.009e+02, percent-clipped=3.0
2023-03-26 18:43:30,232 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84753.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:43:44,006 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 18:43:53,701 INFO [finetune.py:976] (6/7) Epoch 15, batch 4600, loss[loss=0.1655, simple_loss=0.2427, pruned_loss=0.04413, over 4811.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2564, pruned_loss=0.05903, over 957315.13 frames. ], batch size: 41, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:44:06,848 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84800.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:44:07,398 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84801.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:44:21,382 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0
2023-03-26 18:44:36,216 INFO [finetune.py:976] (6/7) Epoch 15, batch 4650, loss[loss=0.1955, simple_loss=0.2717, pruned_loss=0.05965, over 4864.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2527, pruned_loss=0.05782, over 956802.28 frames. ], batch size: 34, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:44:40,390 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.584e+02 1.933e+02 2.372e+02 3.946e+02, threshold=3.865e+02, percent-clipped=0.0
2023-03-26 18:44:41,099 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9538, 1.3803, 0.8443, 1.8686, 2.2268, 1.5908, 1.6702, 1.9703], device='cuda:6'), covar=tensor([0.1244, 0.1708, 0.1885, 0.1035, 0.1696, 0.1799, 0.1223, 0.1574], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0093, 0.0118, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 18:44:47,567 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84855.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:44:56,241 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=84861.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:45:17,970 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84881.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:45:22,918 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7201, 1.6344, 1.5918, 1.7020, 1.1797, 3.6841, 1.4732, 1.9112], device='cuda:6'), covar=tensor([0.3519, 0.2656, 0.2118, 0.2402, 0.1885, 0.0165, 0.2456, 0.1316], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 18:45:25,694 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. limit=2.0
2023-03-26 18:45:26,374 INFO [finetune.py:976] (6/7) Epoch 15, batch 4700, loss[loss=0.1499, simple_loss=0.2144, pruned_loss=0.04269, over 4769.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2501, pruned_loss=0.05711, over 958307.36 frames. ], batch size: 26, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:45:29,346 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84892.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:45:36,526 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84903.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:45:39,795 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0
2023-03-26 18:45:46,244 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7728, 1.7417, 1.5853, 2.0340, 2.2523, 1.9761, 1.4150, 1.4819], device='cuda:6'), covar=tensor([0.2151, 0.1951, 0.1852, 0.1510, 0.1729, 0.1118, 0.2505, 0.1893], device='cuda:6'), in_proj_covar=tensor([0.0239, 0.0206, 0.0209, 0.0191, 0.0240, 0.0184, 0.0214, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:45:59,760 INFO [finetune.py:976] (6/7) Epoch 15, batch 4750, loss[loss=0.1741, simple_loss=0.2336, pruned_loss=0.0573, over 4902.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2483, pruned_loss=0.05659, over 958230.27 frames. ], batch size: 32, lr: 3.48e-03, grad_scale: 32.0
2023-03-26 18:46:01,519 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=84940.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:46:04,959 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.651e+02 1.892e+02 2.436e+02 4.596e+02, threshold=3.784e+02, percent-clipped=1.0
2023-03-26 18:46:08,170 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2684, 2.2370, 2.3810, 1.7489, 2.3020, 2.4276, 2.4669, 1.9005], device='cuda:6'), covar=tensor([0.0596, 0.0611, 0.0639, 0.0841, 0.0558, 0.0630, 0.0565, 0.1118], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0135, 0.0142, 0.0123, 0.0124, 0.0141, 0.0143, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:46:08,760 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=84951.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:46:33,701 INFO [finetune.py:976] (6/7) Epoch 15, batch 4800, loss[loss=0.165, simple_loss=0.2511, pruned_loss=0.03947, over 4788.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2517, pruned_loss=0.05752, over 957985.46 frames. ], batch size: 29, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:46:34,869 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=84989.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:46:36,727 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=84992.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:46:50,492 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85012.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:47:05,172 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.54 vs. limit=2.0
2023-03-26 18:47:05,824 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6022, 3.4826, 3.2934, 1.4851, 3.5949, 2.6452, 0.7438, 2.4491], device='cuda:6'), covar=tensor([0.2216, 0.1892, 0.1442, 0.3520, 0.1049, 0.1070, 0.4444, 0.1463], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0174, 0.0159, 0.0128, 0.0158, 0.0123, 0.0145, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 18:47:07,604 INFO [finetune.py:976] (6/7) Epoch 15, batch 4850, loss[loss=0.1721, simple_loss=0.2509, pruned_loss=0.04667, over 4852.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2555, pruned_loss=0.05852, over 957783.14 frames. ], batch size: 44, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:47:09,346 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85040.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:47:10,612 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5262, 1.4303, 1.3777, 1.4998, 1.0609, 3.2581, 1.3087, 1.7430], device='cuda:6'), covar=tensor([0.3597, 0.2787, 0.2347, 0.2481, 0.1995, 0.0237, 0.2527, 0.1330], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 18:47:10,635 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5369, 1.4347, 1.2846, 1.4831, 1.7798, 1.6498, 1.4508, 1.2334], device='cuda:6'), covar=tensor([0.0282, 0.0316, 0.0605, 0.0265, 0.0204, 0.0425, 0.0347, 0.0396], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0109, 0.0142, 0.0113, 0.0100, 0.0107, 0.0098, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2340e-05, 8.4192e-05, 1.1254e-04, 8.7141e-05, 7.7741e-05, 7.8859e-05, 7.3435e-05, 8.2586e-05], device='cuda:6')
2023-03-26 18:47:12,315 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.585e+02 1.858e+02 2.141e+02 6.123e+02, threshold=3.716e+02, percent-clipped=1.0
2023-03-26 18:47:19,330 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 18:47:20,388 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7513, 0.7591, 1.7118, 1.6387, 1.5442, 1.4899, 1.5512, 1.6395], device='cuda:6'), covar=tensor([0.3352, 0.3944, 0.3519, 0.3354, 0.4498, 0.3415, 0.4150, 0.3193], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0239, 0.0258, 0.0269, 0.0268, 0.0241, 0.0280, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:47:25,968 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-26 18:47:50,158 INFO [finetune.py:976] (6/7) Epoch 15, batch 4900, loss[loss=0.1902, simple_loss=0.2629, pruned_loss=0.05871, over 4843.00 frames. ], tot_loss[loss=0.1889, simple_loss=0.2577, pruned_loss=0.06003, over 957729.05 frames. ], batch size: 49, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:47:53,465 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0
2023-03-26 18:47:59,543 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2027, 1.3121, 1.5725, 0.9907, 1.2251, 1.4178, 1.3019, 1.6156], device='cuda:6'), covar=tensor([0.1347, 0.1982, 0.1328, 0.1585, 0.1003, 0.1360, 0.2721, 0.0880], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0203, 0.0192, 0.0190, 0.0176, 0.0213, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:48:06,023 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85107.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:48:11,340 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85115.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:48:11,961 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4164, 1.2705, 1.3281, 1.3145, 0.9172, 2.2102, 0.7756, 1.1326], device='cuda:6'), covar=tensor([0.3190, 0.2498, 0.2072, 0.2325, 0.1831, 0.0343, 0.2578, 0.1358], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0115, 0.0097, 0.0096, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 18:48:26,708 INFO [finetune.py:976] (6/7) Epoch 15, batch 4950, loss[loss=0.208, simple_loss=0.2782, pruned_loss=0.06889, over 4821.00 frames. ], tot_loss[loss=0.1899, simple_loss=0.2591, pruned_loss=0.06033, over 957850.09 frames. ], batch size: 39, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:48:31,436 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.624e+02 1.884e+02 2.194e+02 3.725e+02, threshold=3.769e+02, percent-clipped=1.0
2023-03-26 18:48:39,186 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85156.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:48:47,008 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85168.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 18:48:52,399 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85176.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:48:55,807 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85181.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:49:00,448 INFO [finetune.py:976] (6/7) Epoch 15, batch 5000, loss[loss=0.1966, simple_loss=0.2527, pruned_loss=0.07028, over 3986.00 frames. ], tot_loss[loss=0.188, simple_loss=0.2571, pruned_loss=0.05949, over 955972.89 frames. ], batch size: 17, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:49:26,011 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-26 18:49:36,187 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85229.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:49:42,103 INFO [finetune.py:976] (6/7) Epoch 15, batch 5050, loss[loss=0.145, simple_loss=0.2184, pruned_loss=0.03577, over 4784.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2541, pruned_loss=0.0589, over 956426.34 frames. ], batch size: 26, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:49:46,816 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.593e+02 1.872e+02 2.269e+02 5.264e+02, threshold=3.745e+02, percent-clipped=1.0
2023-03-26 18:49:48,317 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
2023-03-26 18:50:22,634 INFO [finetune.py:976] (6/7) Epoch 15, batch 5100, loss[loss=0.1538, simple_loss=0.2238, pruned_loss=0.04195, over 4801.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2501, pruned_loss=0.05731, over 955458.09 frames. ], batch size: 29, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:50:23,313 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85289.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:50:42,801 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85307.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:50:57,641 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0
2023-03-26 18:51:02,802 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85337.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:51:03,346 INFO [finetune.py:976] (6/7) Epoch 15, batch 5150, loss[loss=0.2183, simple_loss=0.2784, pruned_loss=0.07908, over 4825.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2497, pruned_loss=0.05749, over 952675.31 frames. ], batch size: 40, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:51:08,103 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.937e+01 1.526e+02 1.888e+02 2.256e+02 3.382e+02, threshold=3.776e+02, percent-clipped=0.0
2023-03-26 18:51:20,301 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0
2023-03-26 18:51:29,577 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0557, 1.9385, 1.5333, 1.8972, 1.9334, 1.6732, 2.2888, 2.0417], device='cuda:6'), covar=tensor([0.1394, 0.2214, 0.3217, 0.2792, 0.2813, 0.1844, 0.3541, 0.1804], device='cuda:6'), in_proj_covar=tensor([0.0181, 0.0188, 0.0233, 0.0253, 0.0245, 0.0201, 0.0212, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:51:37,049 INFO [finetune.py:976] (6/7) Epoch 15, batch 5200, loss[loss=0.2066, simple_loss=0.2617, pruned_loss=0.0757, over 4839.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2541, pruned_loss=0.05937, over 952744.34 frames. ], batch size: 25, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:51:47,121 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-26 18:51:54,950 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 18:52:04,187 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85428.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:52:10,614 INFO [finetune.py:976] (6/7) Epoch 15, batch 5250, loss[loss=0.1822, simple_loss=0.2511, pruned_loss=0.05668, over 4801.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2553, pruned_loss=0.05951, over 952544.56 frames. ], batch size: 51, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:52:15,820 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.657e+02 1.928e+02 2.523e+02 8.274e+02, threshold=3.856e+02, percent-clipped=2.0
2023-03-26 18:52:23,072 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85456.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:52:27,690 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85463.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 18:52:33,031 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85471.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:52:40,800 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85483.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:52:44,237 INFO [finetune.py:976] (6/7) Epoch 15, batch 5300, loss[loss=0.1522, simple_loss=0.2225, pruned_loss=0.04089, over 4766.00 frames. ], tot_loss[loss=0.1892, simple_loss=0.2578, pruned_loss=0.06031, over 955062.12 frames. ], batch size: 27, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:52:44,955 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85489.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:52:54,980 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85504.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:53:14,957 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4224, 2.6393, 2.4337, 1.9055, 2.3535, 2.7635, 2.6782, 2.2240], device='cuda:6'), covar=tensor([0.0592, 0.0504, 0.0732, 0.0890, 0.0932, 0.0727, 0.0620, 0.0934], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0134, 0.0141, 0.0122, 0.0123, 0.0140, 0.0141, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:53:19,675 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85530.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:53:24,905 INFO [finetune.py:976] (6/7) Epoch 15, batch 5350, loss[loss=0.1702, simple_loss=0.2494, pruned_loss=0.04554, over 4840.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2573, pruned_loss=0.06014, over 954642.22 frames. ], batch size: 44, lr: 3.47e-03, grad_scale: 32.0
2023-03-26 18:53:28,695 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85544.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:53:29,183 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.542e+02 1.806e+02 2.197e+02 4.190e+02, threshold=3.613e+02, percent-clipped=2.0
2023-03-26 18:53:30,300 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.60 vs. limit=5.0
2023-03-26 18:53:54,512 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9189, 1.4952, 0.8997, 1.7422, 2.2265, 1.4169, 1.6471, 1.7681], device='cuda:6'), covar=tensor([0.1343, 0.1926, 0.1775, 0.1170, 0.1685, 0.1798, 0.1343, 0.1842], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 18:53:58,023 INFO [finetune.py:976] (6/7) Epoch 15, batch 5400, loss[loss=0.2174, simple_loss=0.2766, pruned_loss=0.07914, over 4922.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2552, pruned_loss=0.05932, over 952782.35 frames. ], batch size: 33, lr: 3.47e-03, grad_scale: 32.0
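The zipformer.py:1188 lines record, per encoder stack and per batch, whether any layers are stochastically skipped: usually num_to_drop=0, occasionally a single layer (e.g. layers_to_drop={1} at batch_count=85463.0 above). warmup_begin/warmup_end are per-stack batch milestones from early training, and at batch_count around 85k this run is far past them, which matches how rare the drops are. A hedged sketch of such a decision; the probabilities are guesses, not the real schedule:

```python
import random

# Assumed behaviour, not the real zipformer.py: on each batch every layer of
# a stack can be skipped independently, with a higher rate during warmup.
def choose_layers_to_drop(batch_count, warmup_begin, warmup_end,
                          num_layers, drop_prob=0.05):
    in_warmup = warmup_begin <= batch_count < warmup_end
    p = 0.5 if in_warmup else drop_prob  # both rates are illustrative
    return {i for i in range(num_layers) if random.random() < p}
    # -> mostly set(), occasionally a singleton like {1}, as logged above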
2023-03-26 18:54:00,451 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85591.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:54:09,512 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0
2023-03-26 18:54:11,212 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85607.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:54:23,544 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85625.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:54:31,765 INFO [finetune.py:976] (6/7) Epoch 15, batch 5450, loss[loss=0.17, simple_loss=0.2398, pruned_loss=0.05004, over 4913.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.252, pruned_loss=0.05827, over 952272.25 frames. ], batch size: 36, lr: 3.47e-03, grad_scale: 64.0
2023-03-26 18:54:41,084 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.884e+01 1.516e+02 1.902e+02 2.390e+02 5.288e+02, threshold=3.804e+02, percent-clipped=4.0
2023-03-26 18:54:52,064 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85655.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:55:05,690 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5717, 1.4749, 1.4068, 1.4576, 1.0024, 3.0265, 1.1013, 1.4793], device='cuda:6'), covar=tensor([0.4131, 0.3106, 0.2379, 0.2911, 0.1931, 0.0342, 0.2672, 0.1315], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0119, 0.0123, 0.0114, 0.0097, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 18:55:06,915 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2239, 2.0408, 1.7173, 1.9015, 1.8914, 1.9260, 1.9242, 2.6967], device='cuda:6'), covar=tensor([0.3987, 0.4178, 0.3473, 0.3811, 0.3944, 0.2565, 0.3807, 0.1609], device='cuda:6'), in_proj_covar=tensor([0.0283, 0.0258, 0.0224, 0.0273, 0.0245, 0.0213, 0.0247, 0.0224], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:55:12,799 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7837, 2.9433, 2.6977, 2.2615, 2.9289, 3.1777, 3.0492, 2.4357], device='cuda:6'), covar=tensor([0.0597, 0.0592, 0.0720, 0.0799, 0.0551, 0.0648, 0.0562, 0.1043], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0134, 0.0140, 0.0122, 0.0123, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:55:16,888 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85686.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 18:55:17,976 INFO [finetune.py:976] (6/7) Epoch 15, batch 5500, loss[loss=0.1656, simple_loss=0.2309, pruned_loss=0.05012, over 4932.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2485, pruned_loss=0.05744, over 952759.11 frames. ], batch size: 38, lr: 3.47e-03, grad_scale: 64.0
2023-03-26 18:55:27,830 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 18:56:02,560 INFO [finetune.py:976] (6/7) Epoch 15, batch 5550, loss[loss=0.1943, simple_loss=0.2596, pruned_loss=0.06449, over 4926.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2501, pruned_loss=0.05809, over 954233.10 frames. ], batch size: 38, lr: 3.47e-03, grad_scale: 64.0
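grad_scale doubles from 32.0 to 64.0 between the batch 5400 and batch 5450 records above, which is the usual signature of dynamic fp16 loss scaling: the scaler doubles its scale after a long enough run of overflow-free steps and halves it again on overflow (it is back at 32.0 by Epoch 16, batch 50 further below). A standard torch.cuda.amp sketch of the same behaviour; init_scale and growth_interval here are illustrative, not values taken from this run:

```python
import torch

# Standard dynamic loss-scaling setup, mimicking the 32 -> 64 jump above.
scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)
# Per training step (model, optimizer and loss_fn assumed defined elsewhere):
#   with torch.cuda.amp.autocast():
#       loss = loss_fn(batch)
#   scaler.scale(loss).backward()
#   scaler.step(optimizer)
#   scaler.update()  # doubles grad_scale after enough overflow-free steps,
#                    # halves it when gradients overflow
```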
2023-03-26 18:56:06,722 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.623e+01 1.589e+02 1.875e+02 2.150e+02 4.153e+02, threshold=3.750e+02, percent-clipped=1.0
2023-03-26 18:56:17,602 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3729, 2.0530, 2.7605, 1.6039, 2.3746, 2.5512, 1.9262, 2.8255], device='cuda:6'), covar=tensor([0.1535, 0.2091, 0.1635, 0.2566, 0.0998, 0.1679, 0.2772, 0.0860], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0205, 0.0193, 0.0191, 0.0177, 0.0213, 0.0218, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:56:19,299 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85763.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:56:24,635 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=85771.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:56:28,937 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-26 18:56:32,627 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85784.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:56:34,953 INFO [finetune.py:976] (6/7) Epoch 15, batch 5600, loss[loss=0.2007, simple_loss=0.2735, pruned_loss=0.06399, over 4902.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2538, pruned_loss=0.05903, over 954084.91 frames. ], batch size: 35, lr: 3.47e-03, grad_scale: 64.0
2023-03-26 18:56:48,430 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85811.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:56:53,121 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=85819.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:56:55,444 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1868, 2.0077, 2.4184, 4.1751, 3.0105, 2.7043, 1.1070, 3.3864], device='cuda:6'), covar=tensor([0.1635, 0.1384, 0.1486, 0.0590, 0.0706, 0.1524, 0.1863, 0.0467], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0101, 0.0139, 0.0125, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 18:57:04,158 INFO [finetune.py:976] (6/7) Epoch 15, batch 5650, loss[loss=0.2162, simple_loss=0.2841, pruned_loss=0.07421, over 4874.00 frames. ], tot_loss[loss=0.1882, simple_loss=0.2565, pruned_loss=0.05999, over 952958.76 frames. ], batch size: 34, lr: 3.47e-03, grad_scale: 64.0
2023-03-26 18:57:04,773 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85839.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:57:08,244 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.126e+02 1.563e+02 1.888e+02 2.328e+02 3.522e+02, threshold=3.776e+02, percent-clipped=0.0
2023-03-26 18:57:26,031 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85875.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:57:32,568 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85886.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:57:33,720 INFO [finetune.py:976] (6/7) Epoch 15, batch 5700, loss[loss=0.1334, simple_loss=0.1946, pruned_loss=0.03615, over 4361.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2526, pruned_loss=0.05872, over 938783.65 frames. ], batch size: 19, lr: 3.47e-03, grad_scale: 64.0
2023-03-26 18:58:02,783 INFO [finetune.py:976] (6/7) Epoch 16, batch 0, loss[loss=0.1764, simple_loss=0.2359, pruned_loss=0.05845, over 4516.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2359, pruned_loss=0.05845, over 4516.00 frames. ], batch size: 19, lr: 3.46e-03, grad_scale: 64.0
2023-03-26 18:58:02,783 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 18:58:12,094 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8435, 1.3761, 0.9490, 1.6162, 2.1462, 1.1497, 1.6311, 1.6510], device='cuda:6'), covar=tensor([0.1379, 0.1921, 0.1784, 0.1214, 0.1787, 0.2025, 0.1319, 0.1944], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 18:58:17,920 INFO [finetune.py:1010] (6/7) Epoch 16, validation: loss=0.1572, simple_loss=0.2278, pruned_loss=0.04329, over 2265189.00 frames.
2023-03-26 18:58:17,921 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB
2023-03-26 18:58:22,812 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5818, 1.4982, 1.3363, 1.5946, 1.9224, 1.7776, 1.5045, 1.3689], device='cuda:6'), covar=tensor([0.0283, 0.0308, 0.0623, 0.0294, 0.0192, 0.0438, 0.0301, 0.0394], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0109, 0.0143, 0.0113, 0.0100, 0.0107, 0.0097, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.2658e-05, 8.4242e-05, 1.1341e-04, 8.7361e-05, 7.8285e-05, 7.8926e-05, 7.3140e-05, 8.2796e-05], device='cuda:6')
2023-03-26 18:58:26,611 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 18:58:26,967 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85930.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 18:58:29,349 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6606, 1.5692, 1.4912, 1.5741, 1.0667, 3.3443, 1.4020, 1.7906], device='cuda:6'), covar=tensor([0.2987, 0.2202, 0.1896, 0.2165, 0.1700, 0.0214, 0.2471, 0.1172], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0114, 0.0118, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 18:58:30,585 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85936.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:58:36,426 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.566e+02 1.783e+02 2.274e+02 8.459e+02, threshold=3.567e+02, percent-clipped=4.0
2023-03-26 18:58:40,717 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4234, 2.4063, 1.7750, 2.7110, 2.4636, 2.0001, 3.0543, 2.4350], device='cuda:6'), covar=tensor([0.1307, 0.2363, 0.3216, 0.2498, 0.2516, 0.1681, 0.3140, 0.1913], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0188, 0.0234, 0.0254, 0.0244, 0.0202, 0.0212, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 18:58:44,322 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=85957.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:58:49,655 INFO [finetune.py:976] (6/7) Epoch 16, batch 50, loss[loss=0.1794, simple_loss=0.2484, pruned_loss=0.05519, over 4885.00 frames. ], tot_loss[loss=0.19, simple_loss=0.259, pruned_loss=0.06049, over 216239.19 frames. ], batch size: 32, lr: 3.46e-03, grad_scale: 32.0
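At the Epoch 16 boundary the run pauses for a validation pass (the finetune.py:1001/1010 lines above): the reported validation loss is a frame-weighted average over the whole dev set (2265189.00 frames), and it decomposes the same way as the training records (0.5 * 0.2278 + 0.04329 is approximately 0.1572). A minimal sketch of such a pass; the loss-computation interface is an assumption for illustration:

```python
import torch

# Hypothetical validation helper mirroring the logged frame-weighted average.
@torch.no_grad()
def validate(model, valid_loader, device, compute_loss):
    # compute_loss(model, batch, device) -> (summed loss tensor, num_frames);
    # this signature is assumed, not taken from finetune.py.
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    for batch in valid_loader:
        loss, num_frames = compute_loss(model, batch, device)
        tot_loss += loss.item()
        tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames  # comparable to "over 2265189.00 frames"
```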
2023-03-26 18:58:59,779 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=85981.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 18:59:05,875 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=85991.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 18:59:14,742 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86002.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:59:17,328 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0
2023-03-26 18:59:23,676 INFO [finetune.py:976] (6/7) Epoch 16, batch 100, loss[loss=0.1852, simple_loss=0.2419, pruned_loss=0.06427, over 4734.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2516, pruned_loss=0.05794, over 379628.76 frames. ], batch size: 59, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 18:59:25,458 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86018.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 18:59:44,131 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.611e+02 1.877e+02 2.147e+02 3.763e+02, threshold=3.754e+02, percent-clipped=3.0
2023-03-26 18:59:47,924 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86052.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:00:00,128 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86063.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:00:06,620 INFO [finetune.py:976] (6/7) Epoch 16, batch 150, loss[loss=0.1891, simple_loss=0.2442, pruned_loss=0.06699, over 4913.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2473, pruned_loss=0.05766, over 508808.71 frames. ], batch size: 43, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:00:08,579 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86069.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:00:17,304 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.70 vs. limit=2.0
2023-03-26 19:00:27,245 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86084.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:00:37,232 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4807, 1.4695, 1.9339, 1.8899, 1.5662, 3.4862, 1.3154, 1.5612], device='cuda:6'), covar=tensor([0.0941, 0.1833, 0.1160, 0.0935, 0.1680, 0.0241, 0.1561, 0.1731], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0078, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 19:00:50,273 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86113.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:00:51,949 INFO [finetune.py:976] (6/7) Epoch 16, batch 200, loss[loss=0.1377, simple_loss=0.2, pruned_loss=0.03765, over 4766.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2455, pruned_loss=0.05625, over 609962.77 frames. ], batch size: 27, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:01:04,021 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86130.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:05,148 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86132.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:09,832 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86139.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:14,012 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.565e+02 1.801e+02 2.285e+02 3.660e+02, threshold=3.601e+02, percent-clipped=0.0
2023-03-26 19:01:27,191 INFO [finetune.py:976] (6/7) Epoch 16, batch 250, loss[loss=0.1844, simple_loss=0.2528, pruned_loss=0.05805, over 4903.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2491, pruned_loss=0.05689, over 688051.68 frames. ], batch size: 43, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:01:34,800 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4676, 1.3463, 1.4173, 0.8260, 1.5131, 1.4616, 1.4487, 1.2751], device='cuda:6'), covar=tensor([0.0656, 0.0832, 0.0770, 0.1050, 0.0808, 0.0731, 0.0641, 0.1301], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0134, 0.0140, 0.0122, 0.0123, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 19:01:40,817 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86186.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:41,402 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86187.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:01:45,506 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8845, 3.4162, 3.5878, 3.7571, 3.6330, 3.4118, 3.9519, 1.2864], device='cuda:6'), covar=tensor([0.0959, 0.0935, 0.0921, 0.1105, 0.1458, 0.1534, 0.0916, 0.5605], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0243, 0.0274, 0.0291, 0.0332, 0.0281, 0.0298, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 19:02:00,656 INFO [finetune.py:976] (6/7) Epoch 16, batch 300, loss[loss=0.1843, simple_loss=0.2665, pruned_loss=0.05102, over 4820.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2547, pruned_loss=0.05876, over 747121.56 frames. ], batch size: 40, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:02:06,690 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1043, 2.0964, 1.6575, 2.0447, 1.8448, 1.8873, 1.9440, 2.7218], device='cuda:6'), covar=tensor([0.3777, 0.4488, 0.3375, 0.3986, 0.4116, 0.2326, 0.3745, 0.1696], device='cuda:6'), in_proj_covar=tensor([0.0283, 0.0259, 0.0225, 0.0273, 0.0246, 0.0213, 0.0247, 0.0225], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 19:02:11,160 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86231.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:12,982 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86234.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:18,107 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 19:02:20,698 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.190e+02 1.625e+02 1.967e+02 2.251e+02 5.649e+02, threshold=3.935e+02, percent-clipped=3.0
2023-03-26 19:02:20,818 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86246.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:34,278 INFO [finetune.py:976] (6/7) Epoch 16, batch 350, loss[loss=0.2205, simple_loss=0.2929, pruned_loss=0.07404, over 4792.00 frames. ], tot_loss[loss=0.1888, simple_loss=0.2573, pruned_loss=0.06021, over 792450.21 frames. ], batch size: 51, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:02:44,494 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86281.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:02:47,964 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86286.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 19:03:01,144 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86307.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:05,014 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0
2023-03-26 19:03:05,209 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86313.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:07,485 INFO [finetune.py:976] (6/7) Epoch 16, batch 400, loss[loss=0.1752, simple_loss=0.2528, pruned_loss=0.04874, over 4898.00 frames. ], tot_loss[loss=0.1893, simple_loss=0.2587, pruned_loss=0.05992, over 830936.06 frames. ], batch size: 46, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:03:15,906 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86329.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:34,373 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.307e+01 1.538e+02 1.774e+02 2.172e+02 4.200e+02, threshold=3.548e+02, percent-clipped=1.0
2023-03-26 19:03:45,094 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86358.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:03:50,328 INFO [finetune.py:976] (6/7) Epoch 16, batch 450, loss[loss=0.2102, simple_loss=0.2704, pruned_loss=0.075, over 4859.00 frames. ], tot_loss[loss=0.1879, simple_loss=0.2575, pruned_loss=0.05917, over 859813.82 frames. ], batch size: 34, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:04:03,732 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5706, 1.4322, 1.3085, 1.5957, 1.5458, 1.6022, 1.0407, 1.3673], device='cuda:6'), covar=tensor([0.2327, 0.2195, 0.2138, 0.1798, 0.1739, 0.1279, 0.2534, 0.2079], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0207, 0.0211, 0.0191, 0.0242, 0.0184, 0.0215, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 19:04:18,723 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86408.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:24,034 INFO [finetune.py:976] (6/7) Epoch 16, batch 500, loss[loss=0.1727, simple_loss=0.2344, pruned_loss=0.05551, over 4430.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.255, pruned_loss=0.05875, over 881503.88 frames. ], batch size: 19, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:04:30,029 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86425.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:40,616 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86440.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:43,627 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86445.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:04:44,071 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.779e+01 1.556e+02 1.875e+02 2.226e+02 4.465e+02, threshold=3.750e+02, percent-clipped=2.0
2023-03-26 19:04:57,163 INFO [finetune.py:976] (6/7) Epoch 16, batch 550, loss[loss=0.1694, simple_loss=0.2367, pruned_loss=0.05106, over 4928.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2516, pruned_loss=0.0576, over 899300.51 frames. ], batch size: 33, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:05:31,554 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86501.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:05:39,633 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86506.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 19:05:49,703 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6368, 1.5017, 2.1586, 3.1710, 2.2745, 2.2605, 1.0870, 2.5699], device='cuda:6'), covar=tensor([0.1617, 0.1394, 0.1118, 0.0515, 0.0733, 0.1602, 0.1607, 0.0524], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0100, 0.0138, 0.0123, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 19:05:49,760 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0163, 1.9011, 1.5738, 1.8563, 1.7297, 1.7793, 1.7862, 2.5077], device='cuda:6'), covar=tensor([0.3624, 0.4116, 0.3262, 0.3702, 0.4125, 0.2355, 0.3801, 0.1592], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0262, 0.0227, 0.0276, 0.0249, 0.0215, 0.0250, 0.0227], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 19:05:50,205 INFO [finetune.py:976] (6/7) Epoch 16, batch 600, loss[loss=0.2028, simple_loss=0.2735, pruned_loss=0.06602, over 4742.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2515, pruned_loss=0.05781, over 911795.56 frames. ], batch size: 54, lr: 3.46e-03, grad_scale: 32.0
2023-03-26 19:06:04,171 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86531.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 19:06:14,604 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.575e+02 1.922e+02 2.222e+02 3.111e+02, threshold=3.844e+02, percent-clipped=0.0
2023-03-26 19:06:27,337 INFO [finetune.py:976] (6/7) Epoch 16, batch 650, loss[loss=0.1844, simple_loss=0.264, pruned_loss=0.05236, over 4902.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.255, pruned_loss=0.05922, over 920840.60 frames. ], batch size: 43, lr: 3.46e-03, grad_scale: 32.0
], batch size: 43, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:06:36,234 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86579.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:06:37,381 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86580.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:06:41,058 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86586.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:06:52,269 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86602.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:06:59,455 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86613.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:07:01,142 INFO [finetune.py:976] (6/7) Epoch 16, batch 700, loss[loss=0.1778, simple_loss=0.2541, pruned_loss=0.05074, over 4249.00 frames. ], tot_loss[loss=0.1875, simple_loss=0.2565, pruned_loss=0.05924, over 928933.95 frames. ], batch size: 65, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:07:13,495 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86634.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 19:07:17,787 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86641.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:07:21,644 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.513e+02 1.866e+02 2.326e+02 3.823e+02, threshold=3.732e+02, percent-clipped=0.0 2023-03-26 19:07:29,571 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86658.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:07:31,332 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86661.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:07:34,797 INFO [finetune.py:976] (6/7) Epoch 16, batch 750, loss[loss=0.198, simple_loss=0.2872, pruned_loss=0.05441, over 4891.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2565, pruned_loss=0.05894, over 933224.03 frames. ], batch size: 43, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:08:02,110 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86706.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:08:03,362 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86708.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:08:08,618 INFO [finetune.py:976] (6/7) Epoch 16, batch 800, loss[loss=0.1596, simple_loss=0.2263, pruned_loss=0.0464, over 4900.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2562, pruned_loss=0.05871, over 938142.91 frames. 
], batch size: 46, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:08:14,186 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86725.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:08:17,071 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4706, 1.3398, 1.9696, 2.9325, 1.9109, 2.2491, 0.8425, 2.3865], device='cuda:6'), covar=tensor([0.1927, 0.1624, 0.1330, 0.0685, 0.0940, 0.1260, 0.2049, 0.0673], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0100, 0.0138, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:08:22,327 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9356, 1.7001, 2.1421, 1.4924, 1.9243, 2.1722, 1.6636, 2.2561], device='cuda:6'), covar=tensor([0.1322, 0.2076, 0.1379, 0.1731, 0.0925, 0.1419, 0.2880, 0.0919], device='cuda:6'), in_proj_covar=tensor([0.0195, 0.0206, 0.0195, 0.0192, 0.0178, 0.0216, 0.0220, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:08:28,784 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.501e+02 1.840e+02 2.208e+02 4.378e+02, threshold=3.681e+02, percent-clipped=4.0 2023-03-26 19:08:40,242 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86756.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:08:49,461 INFO [finetune.py:976] (6/7) Epoch 16, batch 850, loss[loss=0.1887, simple_loss=0.25, pruned_loss=0.06372, over 4864.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2534, pruned_loss=0.05742, over 943397.60 frames. ], batch size: 31, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:08:54,221 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86773.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:09:09,591 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86796.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:09:13,597 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86801.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:09:21,930 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86814.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:09:23,016 INFO [finetune.py:976] (6/7) Epoch 16, batch 900, loss[loss=0.2108, simple_loss=0.2684, pruned_loss=0.07654, over 4738.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2508, pruned_loss=0.05668, over 947899.03 frames. 
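The Clipping_scale records above report five quantiles (min, 25%, 50%, 75%, max) of recent gradient norms, and in each record the threshold equals clipping_scale times the median (for example threshold=3.750e+02 is 2.0 * 1.875e+02 in a record above); percent-clipped would then be the share of recent batches where clipping fired. A minimal sketch of clipping against a median-derived threshold, assuming a bounded history window; the names and window size are illustrative, not the library's actual implementation:

import torch

def clip_with_median_threshold(params, norm_history, clipping_scale=2.0,
                               window=1000):
    grads = [p.grad for p in params if p.grad is not None]
    total_norm = torch.norm(torch.stack([g.detach().norm() for g in grads]))
    norm_history.append(float(total_norm))
    del norm_history[:-window]  # keep only recent global grad norms

    hist = torch.tensor(norm_history)
    quartiles = [hist.quantile(q).item() for q in (0.0, 0.25, 0.5, 0.75, 1.0)]
    threshold = clipping_scale * quartiles[2]  # 2.0 x median, as in the log
    clipped = float(total_norm) > threshold
    if clipped:  # rescale all gradients down to the threshold
        for g in grads:
            g.detach().mul_(threshold / float(total_norm))
    return quartiles, threshold, clipped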
], batch size: 23, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:09:40,877 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6437, 1.4483, 2.1612, 3.3228, 2.2280, 2.2926, 1.1869, 2.7540], device='cuda:6'), covar=tensor([0.1758, 0.1596, 0.1274, 0.0556, 0.0823, 0.1551, 0.1800, 0.0511], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0117, 0.0134, 0.0164, 0.0101, 0.0139, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:09:40,905 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0929, 2.0312, 2.0982, 1.5618, 2.1049, 2.2142, 2.2999, 1.8412], device='cuda:6'), covar=tensor([0.0647, 0.0669, 0.0705, 0.0902, 0.0651, 0.0689, 0.0552, 0.1053], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0135, 0.0141, 0.0123, 0.0123, 0.0140, 0.0141, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:09:43,094 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.502e+02 1.904e+02 2.205e+02 3.944e+02, threshold=3.808e+02, percent-clipped=2.0 2023-03-26 19:09:56,620 INFO [finetune.py:976] (6/7) Epoch 16, batch 950, loss[loss=0.1024, simple_loss=0.1658, pruned_loss=0.01957, over 3258.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2497, pruned_loss=0.05665, over 949497.44 frames. ], batch size: 14, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:10:02,711 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86875.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:10:04,480 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=86878.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:10:20,386 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=86902.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:10:40,045 INFO [finetune.py:976] (6/7) Epoch 16, batch 1000, loss[loss=0.179, simple_loss=0.263, pruned_loss=0.04751, over 4854.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2528, pruned_loss=0.05841, over 950402.56 frames. ], batch size: 44, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:11:01,991 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=86936.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:11:02,694 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9918, 1.7334, 1.5498, 1.6916, 2.2313, 2.1641, 1.7259, 1.6478], device='cuda:6'), covar=tensor([0.0338, 0.0387, 0.0482, 0.0377, 0.0219, 0.0536, 0.0434, 0.0429], device='cuda:6'), in_proj_covar=tensor([0.0093, 0.0108, 0.0143, 0.0112, 0.0099, 0.0106, 0.0097, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2416e-05, 8.3695e-05, 1.1278e-04, 8.6955e-05, 7.7256e-05, 7.8172e-05, 7.2987e-05, 8.2195e-05], device='cuda:6') 2023-03-26 19:11:08,299 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=86939.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:11:12,981 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.633e+02 1.920e+02 2.320e+02 4.350e+02, threshold=3.840e+02, percent-clipped=2.0 2023-03-26 19:11:20,312 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=86950.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:11:34,826 INFO [finetune.py:976] (6/7) Epoch 16, batch 1050, loss[loss=0.2122, simple_loss=0.2803, pruned_loss=0.07205, over 4913.00 frames. 
], tot_loss[loss=0.1854, simple_loss=0.2547, pruned_loss=0.05809, over 950837.81 frames. ], batch size: 36, lr: 3.46e-03, grad_scale: 32.0 2023-03-26 19:11:36,890 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 19:11:58,530 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87002.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:12:08,331 INFO [finetune.py:976] (6/7) Epoch 16, batch 1100, loss[loss=0.1659, simple_loss=0.2534, pruned_loss=0.03918, over 4922.00 frames. ], tot_loss[loss=0.1873, simple_loss=0.2562, pruned_loss=0.05924, over 951528.09 frames. ], batch size: 38, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:12:27,409 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.613e+02 1.835e+02 2.273e+02 4.124e+02, threshold=3.670e+02, percent-clipped=1.0 2023-03-26 19:12:39,635 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87063.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:12:41,763 INFO [finetune.py:976] (6/7) Epoch 16, batch 1150, loss[loss=0.125, simple_loss=0.1987, pruned_loss=0.02562, over 4741.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2576, pruned_loss=0.05984, over 954253.78 frames. ], batch size: 26, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:13:01,430 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87096.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:13:04,428 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87101.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:13:12,085 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2168, 1.7509, 2.1014, 2.0647, 1.8447, 1.8516, 2.0408, 2.0516], device='cuda:6'), covar=tensor([0.4597, 0.4527, 0.3554, 0.4615, 0.5520, 0.4536, 0.5363, 0.3596], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0239, 0.0258, 0.0270, 0.0268, 0.0241, 0.0281, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:13:15,213 INFO [finetune.py:976] (6/7) Epoch 16, batch 1200, loss[loss=0.1743, simple_loss=0.2326, pruned_loss=0.058, over 4763.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2547, pruned_loss=0.05858, over 954996.34 frames. 
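The tot_loss figures behave like exponentially-forgotten, frame-weighted averages rather than plain cumulative sums: with roughly 4,800-frame batches, the "over N frames" count climbs from ~792k at batch 350 toward a plateau near 955k, which a decay factor of (1 - 1/200) predicts (fixed point 200 * 4800 = 960,000 frames, and 960,000 * (1 - 0.995^350) is about 794,000 at batch 350). A sketch of that running statistic, under the assumption of a decay-and-add update; the interval of 200 is inferred from the plateau:

def update_tot_loss(tot, batch, reset_interval=200):
    # `tot` and `batch` hold frame counts and frame-summed losses; older
    # batches are forgotten geometrically, so the reported per-frame
    # tot_loss tracks roughly the last `reset_interval` batches.
    decay = 1.0 - 1.0 / reset_interval
    frames = tot["frames"] * decay + batch["frames"]
    loss_sum = tot["loss_sum"] * decay + batch["loss_sum"]
    return {"frames": frames, "loss_sum": loss_sum,
            "loss": loss_sum / frames}  # the logged tot_loss value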
], batch size: 28, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:13:27,861 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9154, 1.6036, 2.0663, 1.8523, 1.7416, 1.7071, 1.9115, 1.9623], device='cuda:6'), covar=tensor([0.3582, 0.3382, 0.2720, 0.3499, 0.4330, 0.3615, 0.3948, 0.2605], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0238, 0.0257, 0.0269, 0.0267, 0.0240, 0.0280, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:13:30,261 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1793, 2.1602, 2.2734, 1.5296, 2.1901, 2.3470, 2.3458, 1.9258], device='cuda:6'), covar=tensor([0.0552, 0.0529, 0.0555, 0.0799, 0.0577, 0.0591, 0.0529, 0.0936], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0135, 0.0141, 0.0123, 0.0123, 0.0140, 0.0142, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:13:33,240 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87144.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:13:34,381 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.093e+01 1.544e+02 1.890e+02 2.251e+02 4.242e+02, threshold=3.781e+02, percent-clipped=1.0 2023-03-26 19:13:36,666 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87149.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:13:50,131 INFO [finetune.py:976] (6/7) Epoch 16, batch 1250, loss[loss=0.1818, simple_loss=0.2449, pruned_loss=0.05937, over 4758.00 frames. ], tot_loss[loss=0.184, simple_loss=0.2524, pruned_loss=0.05777, over 956953.42 frames. ], batch size: 54, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:13:53,124 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87170.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:14:17,423 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2964, 2.1384, 1.7576, 2.1694, 2.2077, 1.9156, 2.4441, 2.2400], device='cuda:6'), covar=tensor([0.1288, 0.2152, 0.3016, 0.2622, 0.2443, 0.1679, 0.2963, 0.1865], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0188, 0.0233, 0.0253, 0.0244, 0.0202, 0.0211, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:14:31,064 INFO [finetune.py:976] (6/7) Epoch 16, batch 1300, loss[loss=0.1902, simple_loss=0.2499, pruned_loss=0.0653, over 4829.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2492, pruned_loss=0.05682, over 955808.84 frames. ], batch size: 39, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:14:32,656 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 19:14:39,124 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. 
limit=2.0 2023-03-26 19:14:39,369 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7288, 1.5191, 2.1165, 3.4441, 2.2759, 2.4156, 1.1023, 2.8100], device='cuda:6'), covar=tensor([0.1906, 0.1551, 0.1395, 0.0582, 0.0854, 0.1563, 0.1906, 0.0542], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0118, 0.0135, 0.0166, 0.0102, 0.0140, 0.0126, 0.0103], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:14:41,189 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87230.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:14:43,592 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87234.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:14:44,802 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87236.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:14:51,272 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.578e+02 1.944e+02 2.250e+02 4.130e+02, threshold=3.887e+02, percent-clipped=1.0 2023-03-26 19:14:57,990 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5921, 1.5412, 1.3892, 1.5885, 1.2267, 3.7052, 1.6103, 2.0982], device='cuda:6'), covar=tensor([0.3845, 0.2691, 0.2433, 0.2630, 0.1899, 0.0201, 0.2459, 0.1234], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:15:04,399 INFO [finetune.py:976] (6/7) Epoch 16, batch 1350, loss[loss=0.1405, simple_loss=0.2024, pruned_loss=0.03933, over 4237.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2493, pruned_loss=0.05679, over 956363.99 frames. ], batch size: 18, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:15:14,178 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 19:15:16,952 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87284.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:15:21,828 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87291.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:15:44,165 INFO [finetune.py:976] (6/7) Epoch 16, batch 1400, loss[loss=0.1993, simple_loss=0.2634, pruned_loss=0.06758, over 4757.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2524, pruned_loss=0.05793, over 955079.71 frames. 
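The zipformer.py:1188 lines track, for each encoder stack, a warmup window measured in batches (warmup_begin to warmup_end) plus a per-batch stochastic-depth decision: on most batches num_to_drop=0, and occasionally one whole layer is skipped (layers_to_drop={0}, {1}, ...), including long after warmup has ended. A sketch of such a decision rule, assuming a layer-drop probability that decays across the warmup window to a small nonzero floor; both probabilities are illustrative assumptions:

import random

def pick_layers_to_drop(batch_count, warmup_begin, warmup_end, num_layers,
                        initial_p=0.075, final_p=0.025):
    # Linear schedule from initial_p at warmup_begin to final_p at
    # warmup_end; the floor stays nonzero, which is why occasional drops
    # still appear here at batch_count ~86k, far past warmup_end=4000.
    span = max(warmup_end - warmup_begin, 1.0)
    progress = min(max((batch_count - warmup_begin) / span, 0.0), 1.0)
    p = initial_p + progress * (final_p - initial_p)
    return {i for i in range(num_layers) if random.random() < p}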
], batch size: 27, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:15:50,981 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7622, 1.3513, 0.7509, 1.6188, 2.0311, 1.3899, 1.6097, 1.6134], device='cuda:6'), covar=tensor([0.1481, 0.2031, 0.2138, 0.1223, 0.1969, 0.1944, 0.1412, 0.1850], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:16:19,253 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.619e+02 1.873e+02 2.376e+02 3.982e+02, threshold=3.745e+02, percent-clipped=1.0 2023-03-26 19:16:31,539 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87358.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:16:38,534 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87362.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:16:41,379 INFO [finetune.py:976] (6/7) Epoch 16, batch 1450, loss[loss=0.1782, simple_loss=0.2426, pruned_loss=0.05688, over 4900.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2537, pruned_loss=0.05789, over 954794.98 frames. ], batch size: 35, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:17:18,468 INFO [finetune.py:976] (6/7) Epoch 16, batch 1500, loss[loss=0.1992, simple_loss=0.2684, pruned_loss=0.06502, over 4888.00 frames. ], tot_loss[loss=0.1868, simple_loss=0.2558, pruned_loss=0.05893, over 956136.67 frames. ], batch size: 32, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:17:23,388 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87423.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:17:32,108 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-26 19:17:39,130 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.635e+02 2.033e+02 2.421e+02 4.092e+02, threshold=4.066e+02, percent-clipped=1.0 2023-03-26 19:17:44,636 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.81 vs. limit=5.0 2023-03-26 19:17:48,789 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87461.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:17:52,196 INFO [finetune.py:976] (6/7) Epoch 16, batch 1550, loss[loss=0.2067, simple_loss=0.2807, pruned_loss=0.06638, over 4901.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.2553, pruned_loss=0.05867, over 952697.63 frames. ], batch size: 46, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:17:55,190 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87470.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:18:18,813 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87505.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:18:25,477 INFO [finetune.py:976] (6/7) Epoch 16, batch 1600, loss[loss=0.1609, simple_loss=0.2307, pruned_loss=0.04554, over 4767.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2527, pruned_loss=0.05747, over 952251.23 frames. 
], batch size: 26, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:18:27,238 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87518.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:18:30,213 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87522.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:18:38,564 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87534.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:18:46,619 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.707e+01 1.405e+02 1.628e+02 1.991e+02 3.372e+02, threshold=3.256e+02, percent-clipped=0.0 2023-03-26 19:18:48,044 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 19:18:59,344 INFO [finetune.py:976] (6/7) Epoch 16, batch 1650, loss[loss=0.1326, simple_loss=0.2086, pruned_loss=0.0283, over 4935.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2498, pruned_loss=0.05611, over 954061.29 frames. ], batch size: 33, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:18:59,459 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87566.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 19:19:00,032 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8805, 4.4487, 4.3528, 2.5282, 4.6006, 3.5061, 0.9769, 3.4515], device='cuda:6'), covar=tensor([0.2589, 0.1479, 0.1518, 0.2945, 0.1015, 0.0839, 0.4351, 0.1346], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0174, 0.0159, 0.0128, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 19:19:10,145 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87582.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:19:11,440 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87584.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:19:13,588 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87586.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:19:17,438 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87589.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:19:36,468 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=87605.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:19:43,483 INFO [finetune.py:976] (6/7) Epoch 16, batch 1700, loss[loss=0.174, simple_loss=0.2423, pruned_loss=0.05281, over 4756.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.247, pruned_loss=0.05532, over 952405.46 frames. ], batch size: 27, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:19:46,449 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. 
limit=2.0 2023-03-26 19:19:53,111 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2859, 1.7655, 2.3168, 2.2233, 1.9746, 1.9780, 2.2138, 2.1395], device='cuda:6'), covar=tensor([0.4336, 0.4444, 0.3287, 0.3969, 0.5279, 0.3739, 0.4861, 0.3208], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0239, 0.0258, 0.0271, 0.0269, 0.0242, 0.0282, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:20:03,782 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87645.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:20:04,232 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.606e+02 1.879e+02 2.372e+02 9.403e+02, threshold=3.758e+02, percent-clipped=6.0 2023-03-26 19:20:06,829 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:20:08,048 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7221, 1.6525, 1.5453, 1.6921, 1.1268, 3.6068, 1.4658, 1.9046], device='cuda:6'), covar=tensor([0.3367, 0.2545, 0.2207, 0.2405, 0.1849, 0.0182, 0.2613, 0.1298], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0125, 0.0114, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:20:11,628 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87658.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:20:11,686 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1427, 2.1919, 1.7621, 2.2647, 2.0618, 2.0588, 2.0685, 2.9780], device='cuda:6'), covar=tensor([0.4168, 0.4854, 0.3468, 0.4245, 0.4544, 0.2500, 0.4640, 0.1622], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0261, 0.0226, 0.0275, 0.0248, 0.0215, 0.0249, 0.0227], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:20:16,843 INFO [finetune.py:976] (6/7) Epoch 16, batch 1750, loss[loss=0.1561, simple_loss=0.2321, pruned_loss=0.04004, over 4798.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2499, pruned_loss=0.05697, over 952256.15 frames. ], batch size: 29, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:20:16,979 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=87666.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 19:20:20,025 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4364, 2.3443, 2.5769, 1.7398, 2.4618, 2.6024, 2.5891, 2.1019], device='cuda:6'), covar=tensor([0.0536, 0.0533, 0.0556, 0.0786, 0.0540, 0.0593, 0.0536, 0.0956], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0134, 0.0141, 0.0122, 0.0123, 0.0139, 0.0141, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:20:24,845 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.39 vs. limit=5.0 2023-03-26 19:20:44,152 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87706.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:20:50,641 INFO [finetune.py:976] (6/7) Epoch 16, batch 1800, loss[loss=0.1852, simple_loss=0.2577, pruned_loss=0.05639, over 4232.00 frames. ], tot_loss[loss=0.185, simple_loss=0.2537, pruned_loss=0.05814, over 951356.07 frames. 
], batch size: 65, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:20:51,921 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87718.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:21:13,299 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.635e+02 1.907e+02 2.399e+02 5.758e+02, threshold=3.813e+02, percent-clipped=1.0 2023-03-26 19:21:36,200 INFO [finetune.py:976] (6/7) Epoch 16, batch 1850, loss[loss=0.1722, simple_loss=0.2575, pruned_loss=0.04348, over 4794.00 frames. ], tot_loss[loss=0.1846, simple_loss=0.254, pruned_loss=0.05754, over 954384.59 frames. ], batch size: 51, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:22:16,310 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6164, 1.6314, 1.5482, 1.6805, 1.3269, 3.0244, 1.4956, 1.8707], device='cuda:6'), covar=tensor([0.2892, 0.2080, 0.1865, 0.1970, 0.1571, 0.0320, 0.2633, 0.1085], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0116, 0.0121, 0.0125, 0.0115, 0.0097, 0.0097, 0.0097], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:22:22,283 INFO [finetune.py:976] (6/7) Epoch 16, batch 1900, loss[loss=0.1806, simple_loss=0.2614, pruned_loss=0.04988, over 4811.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2557, pruned_loss=0.05827, over 954276.03 frames. ], batch size: 38, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:22:23,021 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87817.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:22:24,889 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1394, 1.8978, 1.4741, 0.6737, 1.7082, 1.7114, 1.5856, 1.7303], device='cuda:6'), covar=tensor([0.0828, 0.0713, 0.1406, 0.1829, 0.1141, 0.2338, 0.2173, 0.0846], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0196, 0.0201, 0.0184, 0.0213, 0.0208, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:22:28,036 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 19:22:38,907 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5715, 3.5290, 3.4018, 1.6145, 3.6989, 2.7348, 0.9569, 2.5530], device='cuda:6'), covar=tensor([0.2372, 0.2073, 0.1589, 0.3179, 0.1086, 0.0929, 0.3926, 0.1339], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0174, 0.0159, 0.0128, 0.0156, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 19:22:41,853 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.250e+02 1.530e+02 1.792e+02 2.215e+02 4.706e+02, threshold=3.584e+02, percent-clipped=3.0 2023-03-26 19:22:45,734 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.36 vs. limit=5.0 2023-03-26 19:22:53,010 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87861.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:22:55,644 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 19:22:55,973 INFO [finetune.py:976] (6/7) Epoch 16, batch 1950, loss[loss=0.1678, simple_loss=0.2436, pruned_loss=0.04604, over 4710.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2547, pruned_loss=0.05747, over 956275.14 frames. 
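The scaling.py:679 "Whitening" lines fire when a whitening diagnostic approaches its configured limit. Consistent with the logged values (metric around 1.2-1.9 against limit=2.0, or ~3.4-4.7 against limit=5.0), the metric can be read as 1.0 for perfectly "white" features (decorrelated channels with equal variance), growing as the covariance eigenvalues spread out. A sketch of one such metric, a hedged reconstruction rather than the library's exact code:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    # x: (num_frames, num_channels). Split channels into groups, form each
    # group's covariance, and compare the mean squared eigenvalue with the
    # squared mean eigenvalue via traces; the ratio is 1.0 exactly when
    # the covariance is a multiple of the identity, i.e. features are white.
    num_frames, num_channels = x.shape
    ch = num_channels // num_groups
    xg = x.reshape(num_frames, num_groups, ch).transpose(0, 1)
    cov = xg.transpose(1, 2) @ xg / num_frames          # (groups, ch, ch)
    mean_sq_eig = torch.diagonal(cov @ cov, dim1=1, dim2=2).sum(-1) / ch
    sq_mean_eig = (torch.diagonal(cov, dim1=1, dim2=2).sum(-1) / ch) ** 2
    return (mean_sq_eig / sq_mean_eig).mean().item()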
], batch size: 54, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:22:57,326 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6543, 1.6349, 1.4270, 1.7756, 1.9956, 1.7794, 1.2331, 1.4232], device='cuda:6'), covar=tensor([0.2420, 0.2091, 0.2103, 0.1753, 0.1836, 0.1293, 0.2710, 0.2071], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0208, 0.0211, 0.0192, 0.0243, 0.0185, 0.0216, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:23:09,168 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=87886.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:23:19,673 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5001, 1.4978, 1.9210, 2.9051, 1.9496, 2.2749, 0.9994, 2.4357], device='cuda:6'), covar=tensor([0.1638, 0.1296, 0.1112, 0.0583, 0.0837, 0.1295, 0.1637, 0.0505], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0100, 0.0138, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:23:29,596 INFO [finetune.py:976] (6/7) Epoch 16, batch 2000, loss[loss=0.1835, simple_loss=0.251, pruned_loss=0.05797, over 4901.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2531, pruned_loss=0.05766, over 956956.89 frames. ], batch size: 32, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:23:41,097 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=87934.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:23:45,287 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87940.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 19:23:48,825 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:23:49,303 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.562e+02 1.812e+02 2.199e+02 5.123e+02, threshold=3.624e+02, percent-clipped=1.0 2023-03-26 19:23:59,912 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=87961.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:24:02,864 INFO [finetune.py:976] (6/7) Epoch 16, batch 2050, loss[loss=0.1864, simple_loss=0.249, pruned_loss=0.06192, over 4810.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2498, pruned_loss=0.05613, over 956614.39 frames. ], batch size: 38, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:24:37,647 INFO [finetune.py:976] (6/7) Epoch 16, batch 2100, loss[loss=0.1865, simple_loss=0.2534, pruned_loss=0.05985, over 4861.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2491, pruned_loss=0.05617, over 956106.91 frames. 
], batch size: 31, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:24:41,376 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88018.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:24:44,992 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5013, 1.3459, 1.9264, 2.9319, 1.9756, 2.2277, 0.9256, 2.4202], device='cuda:6'), covar=tensor([0.1843, 0.1535, 0.1246, 0.0581, 0.0879, 0.1283, 0.1871, 0.0607], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0164, 0.0101, 0.0139, 0.0125, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:24:54,107 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6265, 1.8292, 2.1672, 1.9918, 1.9814, 4.4069, 1.7509, 1.9564], device='cuda:6'), covar=tensor([0.1005, 0.1710, 0.1188, 0.1000, 0.1539, 0.0227, 0.1431, 0.1674], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0081, 0.0074, 0.0077, 0.0092, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:24:59,877 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.339e+01 1.671e+02 1.963e+02 2.403e+02 4.597e+02, threshold=3.926e+02, percent-clipped=2.0 2023-03-26 19:25:13,435 INFO [finetune.py:976] (6/7) Epoch 16, batch 2150, loss[loss=0.1509, simple_loss=0.2148, pruned_loss=0.04352, over 4223.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2523, pruned_loss=0.05727, over 954520.00 frames. ], batch size: 17, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:25:13,502 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88066.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:25:35,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=88100.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:25:46,509 INFO [finetune.py:976] (6/7) Epoch 16, batch 2200, loss[loss=0.1998, simple_loss=0.2833, pruned_loss=0.05817, over 4842.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2541, pruned_loss=0.05774, over 953238.95 frames. 
], batch size: 49, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:25:47,211 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88117.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:25:52,082 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8420, 3.3724, 3.5305, 3.7095, 3.5977, 3.3726, 3.9221, 1.2268], device='cuda:6'), covar=tensor([0.0913, 0.1006, 0.1000, 0.1118, 0.1441, 0.1662, 0.0828, 0.5577], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0245, 0.0276, 0.0291, 0.0335, 0.0282, 0.0298, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:26:05,785 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.658e+02 1.940e+02 2.471e+02 6.986e+02, threshold=3.880e+02, percent-clipped=3.0 2023-03-26 19:26:15,395 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88161.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:26:15,436 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=88161.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:26:18,734 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88165.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:26:19,285 INFO [finetune.py:976] (6/7) Epoch 16, batch 2250, loss[loss=0.1824, simple_loss=0.2406, pruned_loss=0.06207, over 4759.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2543, pruned_loss=0.05755, over 954856.23 frames. ], batch size: 26, lr: 3.45e-03, grad_scale: 32.0 2023-03-26 19:26:31,474 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1774, 1.8181, 2.1815, 2.1178, 1.8075, 1.8621, 2.0958, 1.9840], device='cuda:6'), covar=tensor([0.3644, 0.4083, 0.3085, 0.3722, 0.4781, 0.3873, 0.4619, 0.3185], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0236, 0.0256, 0.0268, 0.0268, 0.0241, 0.0279, 0.0236], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:26:56,476 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88209.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:27:05,821 INFO [finetune.py:976] (6/7) Epoch 16, batch 2300, loss[loss=0.2013, simple_loss=0.2543, pruned_loss=0.07417, over 4258.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2549, pruned_loss=0.0577, over 954339.69 frames. ], batch size: 66, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:27:29,753 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88240.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:27:33,298 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88245.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:27:34,390 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.462e+02 1.840e+02 2.103e+02 3.666e+02, threshold=3.679e+02, percent-clipped=0.0 2023-03-26 19:27:43,888 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88261.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:27:47,287 INFO [finetune.py:976] (6/7) Epoch 16, batch 2350, loss[loss=0.1448, simple_loss=0.2183, pruned_loss=0.03567, over 4754.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2523, pruned_loss=0.05693, over 955253.97 frames. 
], batch size: 27, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:27:50,974 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2165, 2.0660, 1.8113, 1.9821, 2.0056, 1.9658, 2.0450, 2.7301], device='cuda:6'), covar=tensor([0.3827, 0.4195, 0.3330, 0.3385, 0.3597, 0.2360, 0.3511, 0.1658], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0225, 0.0274, 0.0247, 0.0214, 0.0249, 0.0226], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:28:01,669 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88288.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:28:03,564 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=88291.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:28:04,699 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:28:07,105 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2217, 2.1231, 1.7101, 2.0590, 2.1916, 1.8713, 2.5286, 2.2136], device='cuda:6'), covar=tensor([0.1221, 0.2000, 0.2952, 0.2477, 0.2389, 0.1554, 0.2990, 0.1601], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0187, 0.0234, 0.0253, 0.0244, 0.0201, 0.0211, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:28:15,423 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88309.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:28:20,093 INFO [finetune.py:976] (6/7) Epoch 16, batch 2400, loss[loss=0.1958, simple_loss=0.2459, pruned_loss=0.07278, over 4829.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2501, pruned_loss=0.05622, over 956222.23 frames. ], batch size: 33, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:28:40,400 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.525e+02 1.766e+02 2.106e+02 3.774e+02, threshold=3.532e+02, percent-clipped=1.0 2023-03-26 19:28:44,075 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=88352.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 19:28:52,861 INFO [finetune.py:976] (6/7) Epoch 16, batch 2450, loss[loss=0.1708, simple_loss=0.2379, pruned_loss=0.05189, over 4754.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2484, pruned_loss=0.05649, over 955241.41 frames. ], batch size: 27, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:29:26,835 INFO [finetune.py:976] (6/7) Epoch 16, batch 2500, loss[loss=0.1484, simple_loss=0.227, pruned_loss=0.03489, over 4788.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2512, pruned_loss=0.05771, over 956269.84 frames. ], batch size: 29, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:29:48,237 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.743e+02 2.000e+02 2.601e+02 5.270e+02, threshold=4.000e+02, percent-clipped=5.0 2023-03-26 19:29:54,236 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=88456.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:30:00,708 INFO [finetune.py:976] (6/7) Epoch 16, batch 2550, loss[loss=0.2024, simple_loss=0.2706, pruned_loss=0.06705, over 4790.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2552, pruned_loss=0.05893, over 956433.96 frames. 
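The zipformer.py:2441 dumps report one entropy-style statistic per attention head (eight values per tensor here), alongside covariance summaries of the projection activations. As a hedged illustration of the headline statistic only: if each head's attention rows are probability distributions, the per-head mean row entropy has exactly this shape, with small values flagging heads that attend very sharply. The library's exact computation may differ:

import torch

def attn_weights_entropy(attn_weights: torch.Tensor, eps: float = 1e-20):
    # attn_weights: (num_heads, seq_len, seq_len), rows summing to 1.
    # Returns a (num_heads,) tensor of mean per-row entropies.
    p = attn_weights.clamp(min=eps)
    return -(p * p.log()).sum(dim=-1).mean(dim=-1)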
], batch size: 54, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:30:17,735 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1697, 1.8972, 1.4819, 0.6020, 1.6664, 1.7437, 1.6227, 1.7734], device='cuda:6'), covar=tensor([0.0757, 0.0752, 0.1271, 0.1741, 0.1159, 0.1921, 0.2023, 0.0760], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0196, 0.0199, 0.0183, 0.0212, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:30:25,653 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5857, 1.4886, 1.4552, 1.4975, 1.1052, 2.9678, 1.1180, 1.5141], device='cuda:6'), covar=tensor([0.3291, 0.2597, 0.2183, 0.2486, 0.1798, 0.0294, 0.2711, 0.1350], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0124, 0.0114, 0.0097, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:30:33,909 INFO [finetune.py:976] (6/7) Epoch 16, batch 2600, loss[loss=0.1979, simple_loss=0.2536, pruned_loss=0.0711, over 4862.00 frames. ], tot_loss[loss=0.1883, simple_loss=0.2571, pruned_loss=0.05973, over 953714.40 frames. ], batch size: 31, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:30:43,134 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-26 19:30:55,466 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.436e+01 1.666e+02 1.940e+02 2.279e+02 3.712e+02, threshold=3.880e+02, percent-clipped=0.0 2023-03-26 19:31:07,484 INFO [finetune.py:976] (6/7) Epoch 16, batch 2650, loss[loss=0.1697, simple_loss=0.2376, pruned_loss=0.05091, over 4924.00 frames. ], tot_loss[loss=0.1887, simple_loss=0.2579, pruned_loss=0.05975, over 954859.95 frames. ], batch size: 33, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:31:41,337 INFO [finetune.py:976] (6/7) Epoch 16, batch 2700, loss[loss=0.2027, simple_loss=0.2623, pruned_loss=0.07152, over 4734.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2539, pruned_loss=0.05777, over 954364.14 frames. ], batch size: 54, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:32:05,538 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.504e+02 1.815e+02 2.296e+02 4.078e+02, threshold=3.631e+02, percent-clipped=1.0 2023-03-26 19:32:05,629 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=88647.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:32:26,941 INFO [finetune.py:976] (6/7) Epoch 16, batch 2750, loss[loss=0.1892, simple_loss=0.2509, pruned_loss=0.06379, over 4909.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2516, pruned_loss=0.05712, over 955960.52 frames. ], batch size: 37, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:32:42,740 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-26 19:32:51,370 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-26 19:33:17,053 INFO [finetune.py:976] (6/7) Epoch 16, batch 2800, loss[loss=0.1739, simple_loss=0.2468, pruned_loss=0.05046, over 4771.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2486, pruned_loss=0.05648, over 954518.40 frames. 
], batch size: 28, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:33:37,857 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.207e+02 1.516e+02 1.815e+02 2.286e+02 3.246e+02, threshold=3.631e+02, percent-clipped=0.0 2023-03-26 19:33:43,919 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-26 19:33:44,962 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88756.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:33:50,910 INFO [finetune.py:976] (6/7) Epoch 16, batch 2850, loss[loss=0.2333, simple_loss=0.2955, pruned_loss=0.08554, over 4901.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2474, pruned_loss=0.05571, over 955573.43 frames. ], batch size: 35, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:33:54,001 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3832, 1.9308, 2.4503, 2.2950, 2.0725, 2.0624, 2.2545, 2.2233], device='cuda:6'), covar=tensor([0.4233, 0.4261, 0.3291, 0.4116, 0.5166, 0.3984, 0.4609, 0.3360], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0236, 0.0256, 0.0269, 0.0267, 0.0241, 0.0278, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:34:17,619 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88804.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:34:24,823 INFO [finetune.py:976] (6/7) Epoch 16, batch 2900, loss[loss=0.2109, simple_loss=0.2789, pruned_loss=0.07142, over 4912.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.251, pruned_loss=0.05697, over 955075.41 frames. ], batch size: 36, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:34:37,503 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9217, 1.6783, 1.5647, 1.3051, 1.6742, 1.7320, 1.6527, 2.1987], device='cuda:6'), covar=tensor([0.3946, 0.4568, 0.3485, 0.3954, 0.3952, 0.2311, 0.3759, 0.1917], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0227, 0.0275, 0.0249, 0.0216, 0.0250, 0.0228], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:34:38,106 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7313, 1.5267, 1.4118, 0.9085, 1.5380, 1.6576, 1.6875, 1.3649], device='cuda:6'), covar=tensor([0.0692, 0.0475, 0.0508, 0.0433, 0.0471, 0.0535, 0.0298, 0.0607], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0151, 0.0124, 0.0128, 0.0131, 0.0128, 0.0143, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.2324e-05, 1.1015e-04, 8.8809e-05, 9.1032e-05, 9.2887e-05, 9.2425e-05, 1.0266e-04, 1.0640e-04], device='cuda:6') 2023-03-26 19:34:45,211 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.632e+02 1.968e+02 2.500e+02 4.348e+02, threshold=3.936e+02, percent-clipped=6.0 2023-03-26 19:34:58,813 INFO [finetune.py:976] (6/7) Epoch 16, batch 2950, loss[loss=0.1847, simple_loss=0.2679, pruned_loss=0.05075, over 4754.00 frames. ], tot_loss[loss=0.1855, simple_loss=0.2543, pruned_loss=0.05832, over 955503.83 frames. ], batch size: 27, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:35:32,644 INFO [finetune.py:976] (6/7) Epoch 16, batch 3000, loss[loss=0.1631, simple_loss=0.2335, pruned_loss=0.04637, over 4778.00 frames. ], tot_loss[loss=0.1866, simple_loss=0.2556, pruned_loss=0.05879, over 956221.39 frames. 
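The constant grad_scale: 32.0 field in the batch records is the mixed-precision loss scale, which stays put while no overflows occur. A minimal fp16 training step using PyTorch's stock torch.cuda.amp API (the API calls are real; the init_scale and the assumption that the model's forward returns a scalar loss are illustrative):

import torch

scaler = torch.cuda.amp.GradScaler(init_scale=32.0)  # illustrative init

def fp16_step(model, batch, optimizer):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = model(batch)            # assumed: forward returns scalar loss
    scaler.scale(loss).backward()      # backprop through the scaled loss
    scaler.step(optimizer)             # unscales grads, skips step on inf/nan
    scaler.update()                    # grow/shrink the scale as needed
    return loss.detach(), scaler.get_scale()  # latter is the logged grad_scale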
], batch size: 28, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:35:32,644 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 19:35:51,513 INFO [finetune.py:1010] (6/7) Epoch 16, validation: loss=0.1563, simple_loss=0.2263, pruned_loss=0.04316, over 2265189.00 frames. 2023-03-26 19:35:51,514 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 19:35:52,905 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2655, 2.1661, 1.8409, 2.3538, 2.0919, 2.0515, 2.0288, 2.9735], device='cuda:6'), covar=tensor([0.4028, 0.5322, 0.3599, 0.4409, 0.4675, 0.2421, 0.4845, 0.1611], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0227, 0.0275, 0.0249, 0.0216, 0.0251, 0.0229], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:36:10,684 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.661e+02 1.990e+02 2.439e+02 3.546e+02, threshold=3.980e+02, percent-clipped=0.0 2023-03-26 19:36:11,271 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=88947.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:36:23,198 INFO [finetune.py:976] (6/7) Epoch 16, batch 3050, loss[loss=0.2215, simple_loss=0.278, pruned_loss=0.08256, over 4169.00 frames. ], tot_loss[loss=0.1878, simple_loss=0.2573, pruned_loss=0.05916, over 956610.36 frames. ], batch size: 65, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:36:39,238 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1482, 2.0327, 2.1494, 1.5540, 2.1166, 2.1974, 2.1389, 1.8245], device='cuda:6'), covar=tensor([0.0504, 0.0600, 0.0584, 0.0808, 0.0578, 0.0690, 0.0605, 0.0988], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0137, 0.0144, 0.0125, 0.0125, 0.0143, 0.0144, 0.0167], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:36:43,523 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=88995.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 19:36:57,478 INFO [finetune.py:976] (6/7) Epoch 16, batch 3100, loss[loss=0.1593, simple_loss=0.2308, pruned_loss=0.04397, over 4849.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2541, pruned_loss=0.05818, over 954839.96 frames. ], batch size: 49, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:37:06,374 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89027.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:37:06,434 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9879, 1.4034, 2.0703, 1.9734, 1.8015, 1.6792, 1.9148, 1.8117], device='cuda:6'), covar=tensor([0.3633, 0.3789, 0.3012, 0.3692, 0.4247, 0.3593, 0.4195, 0.3045], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0238, 0.0258, 0.0271, 0.0269, 0.0243, 0.0280, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:37:13,115 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 19:37:20,956 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.966e+01 1.506e+02 1.838e+02 2.198e+02 3.411e+02, threshold=3.676e+02, percent-clipped=0.0 2023-03-26 19:37:33,685 INFO [finetune.py:976] (6/7) Epoch 16, batch 3150, loss[loss=0.2073, simple_loss=0.2772, pruned_loss=0.06869, over 4826.00 frames. 
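At batch 3000 above, the loop pauses for a full validation pass (validation: loss=0.1563 over ~2.27e6 dev frames, i.e. the whole dev set rather than a decayed window) and then reports peak CUDA memory. A sketch of that step, assuming a compute_loss helper (hypothetical name) that returns a frame-summed loss and a frame count per batch:

import torch

def compute_validation_loss(model, valid_loader):
    model.eval()
    tot_loss_sum, tot_frames = 0.0, 0.0
    with torch.no_grad():              # no gradients during validation
        for batch in valid_loader:
            loss_sum, frames = compute_loss(model, batch)  # hypothetical helper
            tot_loss_sum += float(loss_sum)
            tot_frames += float(frames)
    model.train()
    return tot_loss_sum / tot_frames   # per-frame validation loss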
], tot_loss[loss=0.1829, simple_loss=0.251, pruned_loss=0.05738, over 955818.63 frames. ], batch size: 40, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:37:56,590 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89088.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:38:25,685 INFO [finetune.py:976] (6/7) Epoch 16, batch 3200, loss[loss=0.1629, simple_loss=0.2357, pruned_loss=0.04504, over 4785.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2477, pruned_loss=0.05613, over 956400.01 frames. ], batch size: 29, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:38:34,535 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 19:38:50,100 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.609e+02 1.908e+02 2.339e+02 4.086e+02, threshold=3.816e+02, percent-clipped=1.0 2023-03-26 19:39:02,082 INFO [finetune.py:976] (6/7) Epoch 16, batch 3250, loss[loss=0.1995, simple_loss=0.268, pruned_loss=0.06546, over 4824.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2479, pruned_loss=0.05615, over 955726.68 frames. ], batch size: 33, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:39:04,501 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89169.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:39:35,940 INFO [finetune.py:976] (6/7) Epoch 16, batch 3300, loss[loss=0.2, simple_loss=0.2794, pruned_loss=0.0603, over 4804.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2514, pruned_loss=0.05708, over 956240.18 frames. ], batch size: 45, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:39:45,595 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89230.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:39:56,763 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.235e+02 1.765e+02 2.004e+02 2.308e+02 3.942e+02, threshold=4.007e+02, percent-clipped=1.0 2023-03-26 19:39:57,498 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3674, 2.6766, 2.5025, 1.8041, 2.5645, 2.6595, 2.5086, 2.4225], device='cuda:6'), covar=tensor([0.0654, 0.0534, 0.0730, 0.0923, 0.0701, 0.0701, 0.0700, 0.0862], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0136, 0.0143, 0.0125, 0.0125, 0.0142, 0.0143, 0.0165], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:40:03,388 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9328, 1.7798, 1.5411, 1.6404, 1.6191, 1.6267, 1.6998, 2.3914], device='cuda:6'), covar=tensor([0.3674, 0.3712, 0.2973, 0.3351, 0.3745, 0.2205, 0.3328, 0.1609], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0227, 0.0276, 0.0249, 0.0217, 0.0250, 0.0229], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:40:09,188 INFO [finetune.py:976] (6/7) Epoch 16, batch 3350, loss[loss=0.1556, simple_loss=0.2322, pruned_loss=0.03945, over 4775.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.255, pruned_loss=0.05841, over 956633.99 frames. ], batch size: 29, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:40:42,689 INFO [finetune.py:976] (6/7) Epoch 16, batch 3400, loss[loss=0.2107, simple_loss=0.2855, pruned_loss=0.06797, over 4845.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2565, pruned_loss=0.05936, over 955089.10 frames. 
], batch size: 44, lr: 3.44e-03, grad_scale: 32.0 2023-03-26 19:41:12,549 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.581e+02 1.832e+02 2.219e+02 5.301e+02, threshold=3.664e+02, percent-clipped=1.0 2023-03-26 19:41:24,426 INFO [finetune.py:976] (6/7) Epoch 16, batch 3450, loss[loss=0.1455, simple_loss=0.226, pruned_loss=0.03247, over 4898.00 frames. ], tot_loss[loss=0.1871, simple_loss=0.2564, pruned_loss=0.05885, over 956478.13 frames. ], batch size: 36, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:41:29,844 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:41:35,744 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89383.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:41:48,192 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9224, 4.5007, 4.2900, 2.3630, 4.6201, 3.4480, 0.6871, 3.1830], device='cuda:6'), covar=tensor([0.2570, 0.1745, 0.1311, 0.3053, 0.0639, 0.0914, 0.4637, 0.1464], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0175, 0.0159, 0.0128, 0.0158, 0.0123, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 19:41:58,332 INFO [finetune.py:976] (6/7) Epoch 16, batch 3500, loss[loss=0.176, simple_loss=0.2354, pruned_loss=0.05833, over 4320.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2535, pruned_loss=0.05744, over 956774.23 frames. ], batch size: 19, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:42:04,412 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89425.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:42:10,964 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89435.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:42:13,377 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89439.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:42:18,651 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.936e+01 1.517e+02 1.946e+02 2.225e+02 4.216e+02, threshold=3.891e+02, percent-clipped=3.0 2023-03-26 19:42:31,135 INFO [finetune.py:976] (6/7) Epoch 16, batch 3550, loss[loss=0.1684, simple_loss=0.2445, pruned_loss=0.04617, over 4752.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2508, pruned_loss=0.05685, over 956094.96 frames. ], batch size: 23, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:42:31,230 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3341, 1.5983, 1.0657, 2.2783, 2.6785, 2.0784, 1.9150, 2.0640], device='cuda:6'), covar=tensor([0.1444, 0.2189, 0.1893, 0.1100, 0.1671, 0.1646, 0.1467, 0.2027], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0095, 0.0111, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:42:40,389 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. 
limit=2.0 2023-03-26 19:42:43,936 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:42:53,371 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89500.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 19:42:59,277 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89509.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:43:06,069 INFO [finetune.py:976] (6/7) Epoch 16, batch 3600, loss[loss=0.1614, simple_loss=0.2325, pruned_loss=0.04518, over 4847.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2469, pruned_loss=0.05496, over 955987.07 frames. ], batch size: 49, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:43:14,221 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89525.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:43:37,737 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.924e+01 1.526e+02 1.890e+02 2.215e+02 3.895e+02, threshold=3.780e+02, percent-clipped=1.0 2023-03-26 19:43:55,669 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2600, 2.6109, 2.4939, 1.2352, 2.7202, 2.1799, 1.8378, 2.3904], device='cuda:6'), covar=tensor([0.0900, 0.1108, 0.2217, 0.2719, 0.1821, 0.2605, 0.2880, 0.1504], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0196, 0.0200, 0.0182, 0.0213, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:44:03,076 INFO [finetune.py:976] (6/7) Epoch 16, batch 3650, loss[loss=0.2, simple_loss=0.2814, pruned_loss=0.05932, over 4826.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2502, pruned_loss=0.05645, over 957104.98 frames. ], batch size: 47, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:44:06,134 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89570.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:44:27,373 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3525, 2.1551, 1.9833, 2.2470, 2.0725, 2.0974, 2.0854, 2.8371], device='cuda:6'), covar=tensor([0.3641, 0.4741, 0.3395, 0.3985, 0.4157, 0.2499, 0.4034, 0.1610], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0228, 0.0277, 0.0250, 0.0218, 0.0251, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:44:36,730 INFO [finetune.py:976] (6/7) Epoch 16, batch 3700, loss[loss=0.1725, simple_loss=0.2523, pruned_loss=0.04637, over 4908.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2533, pruned_loss=0.05754, over 954711.60 frames. 
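The optim.py clipping lines follow a fixed pattern: the five values are the minimum, 25th, 50th, and 75th percentiles, and the maximum of recently observed gradient norms, and the clipping threshold is Clipping_scale times the median (2.0 * 1.890e+02 = 3.780e+02 in the entry just above; 2.0 * 1.908e+02 = 3.816e+02 in the earlier one). A sketch of that statistic, assuming a simple sliding window of per-batch gradient norms; the window size and update cadence inside icefall's optimizer may differ:

import torch

def grad_norm_summary(recent_norms, clipping_scale=2.0):
    """Summarize recent gradient norms the way the optim.py log line does.

    Returns (min, q25, median, q75, max), the clip threshold
    clipping_scale * median, and the share of norms above the threshold
    (the logged percent-clipped).
    """
    t = torch.tensor(recent_norms, dtype=torch.float32)
    q = torch.quantile(t, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * q[2].item()
    percent_clipped = 100.0 * (t > threshold).float().mean().item()
    return q.tolist(), threshold, percent_clipped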
], batch size: 36, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:44:47,570 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6566, 1.6297, 1.9681, 1.2916, 1.6536, 1.9100, 1.5489, 2.1074], device='cuda:6'), covar=tensor([0.1354, 0.2071, 0.1341, 0.1818, 0.1016, 0.1334, 0.2917, 0.0849], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0203, 0.0190, 0.0189, 0.0176, 0.0211, 0.0216, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:44:50,584 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4005, 1.4250, 1.6821, 1.6592, 1.4959, 3.1721, 1.2659, 1.5010], device='cuda:6'), covar=tensor([0.1009, 0.1780, 0.1105, 0.0962, 0.1670, 0.0255, 0.1592, 0.1842], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0092, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:44:56,799 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0 2023-03-26 19:44:57,075 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.548e+01 1.593e+02 1.994e+02 2.376e+02 3.738e+02, threshold=3.989e+02, percent-clipped=0.0 2023-03-26 19:45:03,066 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9928, 1.7540, 2.2668, 1.4824, 2.0255, 2.2368, 1.6670, 2.4748], device='cuda:6'), covar=tensor([0.1246, 0.2106, 0.1360, 0.2145, 0.1003, 0.1455, 0.2524, 0.0754], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0203, 0.0190, 0.0189, 0.0175, 0.0211, 0.0216, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:45:04,297 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8823, 3.0328, 2.9454, 2.1476, 2.9006, 3.2663, 3.1469, 2.7328], device='cuda:6'), covar=tensor([0.0644, 0.0527, 0.0659, 0.0935, 0.0633, 0.0568, 0.0566, 0.0839], device='cuda:6'), in_proj_covar=tensor([0.0137, 0.0137, 0.0145, 0.0126, 0.0126, 0.0143, 0.0144, 0.0168], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:45:10,212 INFO [finetune.py:976] (6/7) Epoch 16, batch 3750, loss[loss=0.1624, simple_loss=0.211, pruned_loss=0.05686, over 4022.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2551, pruned_loss=0.05831, over 954690.39 frames. ], batch size: 17, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:45:19,315 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:45:21,106 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=89683.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:45:43,317 INFO [finetune.py:976] (6/7) Epoch 16, batch 3800, loss[loss=0.2308, simple_loss=0.3009, pruned_loss=0.08037, over 4100.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2554, pruned_loss=0.05812, over 954806.83 frames. 
], batch size: 65, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:45:52,766 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89730.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:45:53,367 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=89731.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:46:00,040 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:46:03,525 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.549e+02 1.882e+02 2.353e+02 4.344e+02, threshold=3.764e+02, percent-clipped=2.0 2023-03-26 19:46:09,911 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-26 19:46:19,024 INFO [finetune.py:976] (6/7) Epoch 16, batch 3850, loss[loss=0.2064, simple_loss=0.2644, pruned_loss=0.07417, over 4711.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2535, pruned_loss=0.05759, over 953088.71 frames. ], batch size: 23, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:46:29,109 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89781.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:46:38,085 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89795.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:46:38,109 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6490, 1.1354, 0.8502, 1.4665, 1.9847, 1.0182, 1.3019, 1.4289], device='cuda:6'), covar=tensor([0.2007, 0.3126, 0.2526, 0.1678, 0.2419, 0.2671, 0.2166, 0.2832], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0095, 0.0111, 0.0093, 0.0119, 0.0095, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:46:38,747 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2870, 2.1915, 1.8865, 2.3140, 2.8003, 2.2872, 2.2734, 1.7008], device='cuda:6'), covar=tensor([0.1936, 0.1837, 0.1779, 0.1472, 0.1648, 0.1043, 0.1816, 0.1832], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0208, 0.0210, 0.0191, 0.0242, 0.0184, 0.0215, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:46:39,354 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6580, 2.3700, 1.8488, 0.8887, 2.1411, 2.0503, 1.8643, 2.1913], device='cuda:6'), covar=tensor([0.0751, 0.0790, 0.1379, 0.1914, 0.1273, 0.1785, 0.1837, 0.0840], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0195, 0.0200, 0.0182, 0.0213, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:46:44,631 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5747, 3.9329, 4.0878, 4.3821, 4.2847, 4.1108, 4.6902, 1.5132], device='cuda:6'), covar=tensor([0.0714, 0.0888, 0.0806, 0.0958, 0.1139, 0.1549, 0.0578, 0.5373], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0245, 0.0276, 0.0293, 0.0332, 0.0281, 0.0298, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:46:52,159 INFO [finetune.py:976] (6/7) Epoch 16, batch 3900, loss[loss=0.1875, simple_loss=0.2653, pruned_loss=0.05487, over 4828.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2527, pruned_loss=0.05804, over 955915.48 frames. 
], batch size: 33, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:46:58,255 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=89825.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:12,473 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.500e+02 1.857e+02 2.274e+02 5.172e+02, threshold=3.715e+02, percent-clipped=1.0 2023-03-26 19:47:15,641 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5537, 1.4488, 1.3964, 1.5174, 1.8658, 1.7071, 1.5782, 1.3882], device='cuda:6'), covar=tensor([0.0398, 0.0299, 0.0551, 0.0308, 0.0195, 0.0541, 0.0320, 0.0384], device='cuda:6'), in_proj_covar=tensor([0.0096, 0.0109, 0.0145, 0.0114, 0.0101, 0.0109, 0.0100, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.4242e-05, 8.4452e-05, 1.1452e-04, 8.7791e-05, 7.8347e-05, 8.0138e-05, 7.4841e-05, 8.3367e-05], device='cuda:6') 2023-03-26 19:47:23,888 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=89865.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:23,921 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=89865.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:24,440 INFO [finetune.py:976] (6/7) Epoch 16, batch 3950, loss[loss=0.1417, simple_loss=0.2165, pruned_loss=0.03342, over 4851.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2494, pruned_loss=0.05641, over 956159.53 frames. ], batch size: 47, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:47:29,794 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=89873.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:47:30,486 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1782, 2.0607, 1.7904, 1.7359, 2.1804, 1.9159, 2.2602, 2.1603], device='cuda:6'), covar=tensor([0.1327, 0.2087, 0.2844, 0.2626, 0.2505, 0.1690, 0.2653, 0.1790], device='cuda:6'), in_proj_covar=tensor([0.0181, 0.0187, 0.0233, 0.0252, 0.0243, 0.0201, 0.0211, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:47:47,669 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1185, 2.0078, 2.1126, 1.4112, 2.1626, 2.2340, 2.2039, 1.7943], device='cuda:6'), covar=tensor([0.0579, 0.0621, 0.0650, 0.0911, 0.0632, 0.0638, 0.0576, 0.1004], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0137, 0.0144, 0.0126, 0.0126, 0.0143, 0.0144, 0.0167], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:47:57,663 INFO [finetune.py:976] (6/7) Epoch 16, batch 4000, loss[loss=0.2075, simple_loss=0.2769, pruned_loss=0.06904, over 4897.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.249, pruned_loss=0.05617, over 956835.70 frames. 
], batch size: 35, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:48:04,743 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=89926.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:48:08,786 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3061, 2.1515, 2.2824, 1.5278, 2.2503, 2.3328, 2.3733, 1.8825], device='cuda:6'), covar=tensor([0.0533, 0.0605, 0.0590, 0.0901, 0.0593, 0.0609, 0.0507, 0.1023], device='cuda:6'), in_proj_covar=tensor([0.0136, 0.0136, 0.0143, 0.0125, 0.0125, 0.0143, 0.0143, 0.0167], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:48:17,805 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5171, 1.3968, 1.7499, 1.7640, 1.5656, 3.2653, 1.3440, 1.5108], device='cuda:6'), covar=tensor([0.1013, 0.1937, 0.1301, 0.0958, 0.1589, 0.0264, 0.1543, 0.1863], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0078, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:48:18,313 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.466e+01 1.676e+02 1.974e+02 2.469e+02 4.779e+02, threshold=3.947e+02, percent-clipped=6.0 2023-03-26 19:48:32,940 INFO [finetune.py:976] (6/7) Epoch 16, batch 4050, loss[loss=0.1917, simple_loss=0.2555, pruned_loss=0.06396, over 4082.00 frames. ], tot_loss[loss=0.184, simple_loss=0.2523, pruned_loss=0.05784, over 954803.97 frames. ], batch size: 17, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:49:01,489 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-26 19:49:29,804 INFO [finetune.py:976] (6/7) Epoch 16, batch 4100, loss[loss=0.2021, simple_loss=0.2729, pruned_loss=0.06564, over 4905.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2533, pruned_loss=0.05769, over 952494.95 frames. 
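grad_scale doubles from 32.0 to 64.0 at Epoch 16, batch 4050 above and is back at 32.0 by batch 4300, the signature of dynamic fp16 loss scaling: the scale grows after a long run of overflow-free steps and is halved as soon as an inf/nan gradient appears. A minimal loop with torch.cuda.amp showing the same mechanism (the growth interval here is the PyTorch default, not necessarily what this training script uses):

import torch

scaler = torch.cuda.amp.GradScaler(
    init_scale=32.0,        # matches the logged starting grad_scale
    growth_factor=2.0,      # 32.0 -> 64.0 after enough finite-gradient steps
    backoff_factor=0.5,     # 64.0 -> 32.0 the moment an overflow is seen
    growth_interval=2000,   # PyTorch default; the actual interval is unknown
)

def fp16_step(model, optimizer, batch, loss_fn):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(batch))
    scaler.scale(loss).backward()  # backward on the scaled loss, avoiding fp16 underflow
    scaler.step(optimizer)         # silently skips the update on inf/nan gradients
    scaler.update()                # adjusts grad_scale for the next batch
    return loss.detach(), scaler.get_scale()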
], batch size: 37, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:49:30,519 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9883, 1.6486, 2.3498, 1.4938, 2.0859, 2.1617, 1.6215, 2.3738], device='cuda:6'), covar=tensor([0.1243, 0.2122, 0.1633, 0.2036, 0.0955, 0.1561, 0.2737, 0.0859], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0206, 0.0192, 0.0192, 0.0177, 0.0214, 0.0220, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:49:32,994 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7238, 1.7065, 1.5694, 1.8834, 2.1118, 1.9392, 1.6181, 1.4316], device='cuda:6'), covar=tensor([0.2080, 0.1894, 0.1751, 0.1463, 0.1853, 0.1170, 0.2336, 0.1818], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0209, 0.0212, 0.0192, 0.0244, 0.0186, 0.0216, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:49:43,229 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:49:46,969 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:49:51,492 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6532, 1.5915, 2.3600, 3.2156, 2.2193, 2.4695, 1.5860, 2.6219], device='cuda:6'), covar=tensor([0.1598, 0.1333, 0.1069, 0.0567, 0.0743, 0.1787, 0.1303, 0.0533], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0166, 0.0101, 0.0140, 0.0125, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:49:54,442 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.559e+02 1.839e+02 2.160e+02 6.359e+02, threshold=3.678e+02, percent-clipped=1.0 2023-03-26 19:49:58,248 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6068, 1.4837, 1.8971, 1.2423, 1.6515, 1.7500, 1.4342, 2.0042], device='cuda:6'), covar=tensor([0.1320, 0.2250, 0.1336, 0.1829, 0.0996, 0.1445, 0.2913, 0.0888], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0206, 0.0193, 0.0192, 0.0178, 0.0214, 0.0220, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:50:06,384 INFO [finetune.py:976] (6/7) Epoch 16, batch 4150, loss[loss=0.1838, simple_loss=0.2556, pruned_loss=0.05602, over 4803.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2552, pruned_loss=0.05835, over 954046.43 frames. ], batch size: 45, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:50:14,192 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:16,550 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90081.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:26,448 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90095.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 19:50:39,514 INFO [finetune.py:976] (6/7) Epoch 16, batch 4200, loss[loss=0.1558, simple_loss=0.2426, pruned_loss=0.03449, over 4767.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2556, pruned_loss=0.05781, over 955743.84 frames. 
], batch size: 28, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:50:47,976 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90129.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:48,628 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90130.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:50:56,647 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3229, 2.9455, 3.0987, 3.2256, 3.0977, 2.9115, 3.3572, 1.0306], device='cuda:6'), covar=tensor([0.1012, 0.1034, 0.1000, 0.1154, 0.1543, 0.1676, 0.1061, 0.5191], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0246, 0.0278, 0.0296, 0.0335, 0.0282, 0.0300, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:50:57,837 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90143.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 19:51:00,650 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.547e+02 1.785e+02 2.134e+02 3.751e+02, threshold=3.570e+02, percent-clipped=1.0 2023-03-26 19:51:12,332 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90165.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:51:12,435 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-26 19:51:12,839 INFO [finetune.py:976] (6/7) Epoch 16, batch 4250, loss[loss=0.2339, simple_loss=0.2854, pruned_loss=0.09118, over 4753.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.253, pruned_loss=0.05687, over 956261.97 frames. ], batch size: 27, lr: 3.43e-03, grad_scale: 64.0 2023-03-26 19:51:29,553 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90191.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:51:43,669 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90213.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:51:45,896 INFO [finetune.py:976] (6/7) Epoch 16, batch 4300, loss[loss=0.1482, simple_loss=0.2226, pruned_loss=0.0369, over 4853.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2511, pruned_loss=0.05651, over 955637.81 frames. ], batch size: 44, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:51:48,957 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90221.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:51:56,165 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90232.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:51:59,203 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7997, 1.2453, 0.8735, 1.6496, 2.1737, 1.5460, 1.4723, 1.7855], device='cuda:6'), covar=tensor([0.1405, 0.2005, 0.2023, 0.1226, 0.1804, 0.1833, 0.1402, 0.1732], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0096, 0.0112, 0.0094, 0.0120, 0.0096, 0.0099, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 19:52:07,168 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.180e+02 1.551e+02 1.797e+02 2.190e+02 3.764e+02, threshold=3.594e+02, percent-clipped=1.0 2023-03-26 19:52:18,555 INFO [finetune.py:976] (6/7) Epoch 16, batch 4350, loss[loss=0.2198, simple_loss=0.2784, pruned_loss=0.08062, over 4731.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2493, pruned_loss=0.05668, over 956249.28 frames. 
], batch size: 59, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:52:36,981 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:52:47,695 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4451, 3.8845, 4.0487, 4.1648, 4.1868, 3.9428, 4.5096, 1.7248], device='cuda:6'), covar=tensor([0.0721, 0.1065, 0.0903, 0.1021, 0.1157, 0.1524, 0.0655, 0.4990], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0246, 0.0277, 0.0295, 0.0335, 0.0283, 0.0299, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:52:51,899 INFO [finetune.py:976] (6/7) Epoch 16, batch 4400, loss[loss=0.1659, simple_loss=0.2294, pruned_loss=0.05119, over 4712.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2503, pruned_loss=0.05751, over 954751.37 frames. ], batch size: 23, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:52:53,191 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6732, 1.6519, 1.5001, 1.6134, 1.4649, 3.9303, 1.5591, 1.7936], device='cuda:6'), covar=tensor([0.3236, 0.2486, 0.2135, 0.2315, 0.1573, 0.0178, 0.2421, 0.1302], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0097, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:53:04,943 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:53:13,599 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.586e+02 1.845e+02 2.241e+02 4.760e+02, threshold=3.689e+02, percent-clipped=4.0 2023-03-26 19:53:15,484 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90351.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:53:25,407 INFO [finetune.py:976] (6/7) Epoch 16, batch 4450, loss[loss=0.2243, simple_loss=0.2952, pruned_loss=0.07667, over 4839.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2543, pruned_loss=0.05849, over 954626.31 frames. 
], batch size: 47, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:53:32,083 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7292, 1.2208, 0.7830, 1.5220, 2.0956, 1.0743, 1.4087, 1.5276], device='cuda:6'), covar=tensor([0.1442, 0.2101, 0.1996, 0.1189, 0.1767, 0.1997, 0.1469, 0.2054], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0095, 0.0111, 0.0093, 0.0119, 0.0095, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 19:53:37,364 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90384.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:53:42,205 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5265, 1.3843, 1.5174, 0.7381, 1.5505, 1.5283, 1.4906, 1.3634], device='cuda:6'), covar=tensor([0.0626, 0.0842, 0.0739, 0.1059, 0.0909, 0.0767, 0.0655, 0.1225], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0135, 0.0142, 0.0125, 0.0124, 0.0141, 0.0143, 0.0165], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:54:05,566 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90412.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:54:07,882 INFO [finetune.py:976] (6/7) Epoch 16, batch 4500, loss[loss=0.2076, simple_loss=0.2824, pruned_loss=0.06641, over 4850.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.255, pruned_loss=0.05831, over 955132.07 frames. ], batch size: 44, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:54:15,034 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7978, 1.5019, 1.9769, 1.3196, 1.8360, 2.0091, 1.4281, 2.1200], device='cuda:6'), covar=tensor([0.1259, 0.2146, 0.1449, 0.1969, 0.0979, 0.1364, 0.2903, 0.0872], device='cuda:6'), in_proj_covar=tensor([0.0195, 0.0206, 0.0193, 0.0192, 0.0178, 0.0214, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:54:40,946 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.141e+02 1.659e+02 2.056e+02 2.631e+02 3.688e+02, threshold=4.111e+02, percent-clipped=0.0 2023-03-26 19:54:53,411 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90459.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:54:58,607 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 19:55:01,923 INFO [finetune.py:976] (6/7) Epoch 16, batch 4550, loss[loss=0.1632, simple_loss=0.2323, pruned_loss=0.04706, over 4848.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.256, pruned_loss=0.05837, over 957192.21 frames. 
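The zipformer.py warmup lines show each encoder stack deciding, per batch, whether to bypass whole layers: num_to_drop is almost always 0 this deep into training (batch_count near 90000 against warmup_end=4000.0), but entries like num_to_drop=1, layers_to_drop={2} show that a small residual drop probability remains. A hypothetical sketch of such a per-batch draw; the probabilities are illustrative only, and the real schedule lives in zipformer.py:

import random

def pick_layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end,
                        residual_prob=0.05, warmup_prob=0.2):
    """Per-batch stochastic choice of encoder layers to bypass.

    Illustrative only: uses an elevated drop probability inside the warmup
    window and a small residual probability afterwards, which reproduces
    logs where layers_to_drop is usually set() with an occasional {i}.
    """
    if batch_count < warmup_begin:
        prob = 0.0
    elif batch_count <= warmup_end:
        prob = warmup_prob
    else:
        prob = residual_prob
    return {i for i in range(num_layers) if random.random() < prob}

print(pick_layers_to_drop(4, batch_count=90232.0,
                          warmup_begin=2666.7, warmup_end=3333.3))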
], batch size: 25, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:55:02,049 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9908, 1.8484, 2.0660, 1.4727, 2.0986, 2.1580, 2.1377, 1.6066], device='cuda:6'), covar=tensor([0.0634, 0.0761, 0.0745, 0.1001, 0.0747, 0.0722, 0.0608, 0.1264], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0135, 0.0142, 0.0124, 0.0123, 0.0141, 0.0142, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:55:18,094 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90486.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:55:24,691 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90496.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:55:38,625 INFO [finetune.py:976] (6/7) Epoch 16, batch 4600, loss[loss=0.1843, simple_loss=0.2524, pruned_loss=0.05809, over 4746.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.256, pruned_loss=0.05843, over 957783.45 frames. ], batch size: 59, lr: 3.43e-03, grad_scale: 32.0 2023-03-26 19:55:41,655 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90520.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:55:42,189 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90521.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 19:55:57,036 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3193, 1.4320, 1.4772, 0.7916, 1.5191, 1.7152, 1.7930, 1.3353], device='cuda:6'), covar=tensor([0.0902, 0.0575, 0.0505, 0.0530, 0.0432, 0.0577, 0.0283, 0.0655], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0151, 0.0124, 0.0127, 0.0131, 0.0129, 0.0142, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.2443e-05, 1.1008e-04, 8.8667e-05, 9.0744e-05, 9.2434e-05, 9.2771e-05, 1.0250e-04, 1.0656e-04], device='cuda:6') 2023-03-26 19:55:59,291 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.556e+01 1.469e+02 1.715e+02 2.012e+02 4.010e+02, threshold=3.429e+02, percent-clipped=0.0 2023-03-26 19:56:05,670 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6123, 1.5664, 1.5174, 1.5736, 1.1438, 3.1493, 1.3055, 1.7442], device='cuda:6'), covar=tensor([0.3087, 0.2247, 0.2036, 0.2131, 0.1700, 0.0211, 0.2551, 0.1159], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0097, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:56:06,289 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90557.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:56:11,542 INFO [finetune.py:976] (6/7) Epoch 16, batch 4650, loss[loss=0.1576, simple_loss=0.2239, pruned_loss=0.04566, over 4816.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2524, pruned_loss=0.0576, over 957788.63 frames. ], batch size: 40, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:56:14,404 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90569.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:56:26,170 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90588.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:56:45,054 INFO [finetune.py:976] (6/7) Epoch 16, batch 4700, loss[loss=0.1707, simple_loss=0.2388, pruned_loss=0.05135, over 4702.00 frames. 
], tot_loss[loss=0.1816, simple_loss=0.2498, pruned_loss=0.0567, over 955634.38 frames. ], batch size: 23, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:57:05,663 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.593e+02 1.858e+02 2.163e+02 3.767e+02, threshold=3.717e+02, percent-clipped=1.0 2023-03-26 19:57:18,492 INFO [finetune.py:976] (6/7) Epoch 16, batch 4750, loss[loss=0.2164, simple_loss=0.2753, pruned_loss=0.07877, over 4927.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.247, pruned_loss=0.05603, over 954653.06 frames. ], batch size: 38, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:57:34,607 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6508, 3.4675, 3.3181, 1.5984, 3.6371, 2.6503, 0.9651, 2.4154], device='cuda:6'), covar=tensor([0.2280, 0.2212, 0.1760, 0.3606, 0.0968, 0.1096, 0.4246, 0.1759], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0175, 0.0158, 0.0128, 0.0157, 0.0122, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 19:57:45,642 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90707.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:57:52,459 INFO [finetune.py:976] (6/7) Epoch 16, batch 4800, loss[loss=0.1832, simple_loss=0.2362, pruned_loss=0.06515, over 4709.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2495, pruned_loss=0.05714, over 954492.80 frames. ], batch size: 23, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:58:00,692 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5233, 1.4483, 1.4222, 1.4842, 1.1517, 3.3812, 1.3382, 1.7827], device='cuda:6'), covar=tensor([0.3629, 0.2727, 0.2294, 0.2485, 0.1831, 0.0197, 0.2753, 0.1352], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0097, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 19:58:13,286 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.231e+02 1.629e+02 1.979e+02 2.321e+02 4.531e+02, threshold=3.957e+02, percent-clipped=1.0 2023-03-26 19:58:21,958 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.90 vs. limit=5.0 2023-03-26 19:58:25,071 INFO [finetune.py:976] (6/7) Epoch 16, batch 4850, loss[loss=0.234, simple_loss=0.2994, pruned_loss=0.08431, over 4061.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2524, pruned_loss=0.05772, over 952996.58 frames. ], batch size: 66, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:58:39,068 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90786.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:58:57,840 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90815.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:58:58,402 INFO [finetune.py:976] (6/7) Epoch 16, batch 4900, loss[loss=0.2335, simple_loss=0.2935, pruned_loss=0.0868, over 4155.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2539, pruned_loss=0.05812, over 952326.85 frames. 
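The slow decay of the logged learning rate (3.44e-03 at the top of this stretch, 3.41e-03 by Epoch 17) is consistent with icefall's Eden schedule, taking base_lr=0.004, lr_batches=100000, and lr_epochs=100 from the run parameters logged at startup; the formula below is my reading of that scheduler, not a quote of the code:

def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
    """Eden-style learning rate: smooth inverse-quartic decay in both
    the batch index and the epoch index (parameter values assumed)."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

print(f"{eden_lr(0.004, batch=90000, epoch=16):.2e}")  # ~3.43e-03, as logged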
], batch size: 65, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:59:11,175 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90834.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:59:24,118 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.543e+02 1.926e+02 2.205e+02 3.945e+02, threshold=3.852e+02, percent-clipped=0.0 2023-03-26 19:59:25,330 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=90849.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:59:27,135 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=90852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 19:59:42,911 INFO [finetune.py:976] (6/7) Epoch 16, batch 4950, loss[loss=0.1769, simple_loss=0.2494, pruned_loss=0.05221, over 4900.00 frames. ], tot_loss[loss=0.1864, simple_loss=0.2555, pruned_loss=0.05868, over 952158.52 frames. ], batch size: 37, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 19:59:52,406 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8160, 1.4054, 2.0750, 1.2677, 1.8636, 1.9480, 1.3645, 2.0841], device='cuda:6'), covar=tensor([0.1435, 0.2180, 0.1230, 0.2044, 0.0962, 0.1595, 0.2951, 0.1019], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0205, 0.0192, 0.0191, 0.0176, 0.0213, 0.0218, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 19:59:55,304 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0815, 1.3530, 1.3387, 1.3119, 1.4605, 2.4422, 1.2791, 1.4352], device='cuda:6'), covar=tensor([0.1028, 0.1894, 0.1056, 0.0951, 0.1604, 0.0368, 0.1508, 0.1844], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0074, 0.0078, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 20:00:03,241 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6242, 1.5685, 2.0464, 2.7680, 1.9697, 2.1427, 1.4685, 2.2731], device='cuda:6'), covar=tensor([0.1423, 0.1201, 0.0951, 0.0595, 0.0740, 0.2017, 0.1244, 0.0564], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0165, 0.0101, 0.0137, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 20:00:12,905 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=90888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:00:22,426 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8372, 1.8876, 1.5568, 2.0113, 2.4300, 2.0831, 1.6977, 1.5545], device='cuda:6'), covar=tensor([0.2034, 0.1871, 0.1878, 0.1549, 0.1594, 0.1088, 0.2389, 0.1850], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0209, 0.0212, 0.0191, 0.0243, 0.0186, 0.0216, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:00:34,906 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=90910.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:00:38,892 INFO [finetune.py:976] (6/7) Epoch 16, batch 5000, loss[loss=0.1817, simple_loss=0.2558, pruned_loss=0.05382, over 4814.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2553, pruned_loss=0.05905, over 953709.81 frames. 
], batch size: 30, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:00:53,037 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=90936.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:01:00,214 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.882e+01 1.531e+02 1.843e+02 2.134e+02 5.620e+02, threshold=3.687e+02, percent-clipped=2.0 2023-03-26 20:01:11,975 INFO [finetune.py:976] (6/7) Epoch 16, batch 5050, loss[loss=0.134, simple_loss=0.21, pruned_loss=0.02906, over 4866.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2526, pruned_loss=0.05819, over 955911.48 frames. ], batch size: 34, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:01:40,195 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91007.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:01:45,574 INFO [finetune.py:976] (6/7) Epoch 16, batch 5100, loss[loss=0.1475, simple_loss=0.2175, pruned_loss=0.03874, over 4712.00 frames. ], tot_loss[loss=0.18, simple_loss=0.248, pruned_loss=0.05598, over 954823.27 frames. ], batch size: 23, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:02:00,113 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0 2023-03-26 20:02:04,931 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6674, 1.5541, 2.0332, 3.2038, 2.2117, 2.2993, 1.1147, 2.6327], device='cuda:6'), covar=tensor([0.1728, 0.1486, 0.1289, 0.0669, 0.0816, 0.1421, 0.1824, 0.0528], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0165, 0.0101, 0.0137, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 20:02:07,770 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.471e+02 1.778e+02 2.096e+02 3.940e+02, threshold=3.556e+02, percent-clipped=2.0 2023-03-26 20:02:12,115 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91055.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:02:19,222 INFO [finetune.py:976] (6/7) Epoch 16, batch 5150, loss[loss=0.2377, simple_loss=0.3029, pruned_loss=0.08625, over 4907.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.247, pruned_loss=0.0554, over 953521.15 frames. ], batch size: 43, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:02:47,837 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-26 20:02:52,435 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91115.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:02:52,958 INFO [finetune.py:976] (6/7) Epoch 16, batch 5200, loss[loss=0.1783, simple_loss=0.2508, pruned_loss=0.05294, over 4787.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.251, pruned_loss=0.05705, over 952875.01 frames. ], batch size: 29, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:03:00,813 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0926, 1.2824, 1.3603, 1.2581, 1.4473, 2.5021, 1.1918, 1.4076], device='cuda:6'), covar=tensor([0.1099, 0.1927, 0.1088, 0.1078, 0.1786, 0.0359, 0.1657, 0.1910], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0078, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 20:03:06,305 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. 
limit=2.0 2023-03-26 20:03:14,342 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.594e+02 1.966e+02 2.465e+02 4.658e+02, threshold=3.932e+02, percent-clipped=3.0 2023-03-26 20:03:17,820 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91152.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:24,390 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91163.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:26,646 INFO [finetune.py:976] (6/7) Epoch 16, batch 5250, loss[loss=0.2478, simple_loss=0.3014, pruned_loss=0.0971, over 4903.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2539, pruned_loss=0.0579, over 953895.26 frames. ], batch size: 37, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:03:49,258 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91200.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:53,266 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91205.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:03:59,841 INFO [finetune.py:976] (6/7) Epoch 16, batch 5300, loss[loss=0.1727, simple_loss=0.2444, pruned_loss=0.05055, over 4712.00 frames. ], tot_loss[loss=0.1862, simple_loss=0.2556, pruned_loss=0.05841, over 955041.99 frames. ], batch size: 54, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:04:21,113 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.597e+02 1.843e+02 2.222e+02 3.769e+02, threshold=3.686e+02, percent-clipped=0.0 2023-03-26 20:04:26,656 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.56 vs. limit=5.0 2023-03-26 20:04:33,501 INFO [finetune.py:976] (6/7) Epoch 16, batch 5350, loss[loss=0.156, simple_loss=0.2232, pruned_loss=0.04436, over 4762.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2553, pruned_loss=0.05835, over 952347.40 frames. ], batch size: 28, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:04:50,646 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91285.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:05:08,494 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 20:05:29,810 INFO [finetune.py:976] (6/7) Epoch 16, batch 5400, loss[loss=0.1416, simple_loss=0.2296, pruned_loss=0.02677, over 4814.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2538, pruned_loss=0.05817, over 952929.21 frames. 
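The scaling.py Whitening lines compare a statistic of the per-group channel covariance against a limit (metric=1.16 vs. limit=2.0 just above); the metric is only reported here, with no penalty firing while it stays under the limit. One statistic with exactly this behaviour, floored at 1.0 for perfectly white activations, is the eigenvalue-spread ratio mean(eig^2)/mean(eig)^2, sketched below as a plausible reading rather than the exact formula in scaling.py:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """Eigenvalue-spread of the channel covariance, averaged over groups.

    x: (num_frames, num_channels). Returns mean(eig^2) / mean(eig)^2 of each
    group's covariance, computed as n * tr(C @ C) / tr(C)^2; this is 1.0 for
    perfectly whitened channels and grows as they become correlated.
    """
    num_frames, num_channels = x.shape
    xg = x.reshape(num_frames, num_groups, num_channels // num_groups)
    total = 0.0
    for g in range(num_groups):
        feats = xg[:, g, :] - xg[:, g, :].mean(dim=0)
        cov = feats.T @ feats / num_frames       # per-group channel covariance
        n = cov.shape[0]
        total += (n * torch.trace(cov @ cov) / torch.trace(cov) ** 2).item()
    return total / num_groups

print(whitening_metric(torch.randn(1000, 96), num_groups=8))  # close to 1.0 for noise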
], batch size: 38, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:05:58,705 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4817, 1.3707, 1.3571, 1.3616, 0.7558, 2.2653, 0.7235, 1.2838], device='cuda:6'), covar=tensor([0.3247, 0.2511, 0.2251, 0.2468, 0.2088, 0.0348, 0.2709, 0.1361], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0124, 0.0114, 0.0097, 0.0097, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 20:06:02,277 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91346.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:06:03,348 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.781e+01 1.583e+02 1.812e+02 2.291e+02 3.767e+02, threshold=3.624e+02, percent-clipped=1.0 2023-03-26 20:06:03,945 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3550, 1.9979, 2.5928, 4.1281, 2.9464, 2.8243, 1.0706, 3.4139], device='cuda:6'), covar=tensor([0.1457, 0.1327, 0.1250, 0.0549, 0.0674, 0.1369, 0.1781, 0.0409], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0165, 0.0101, 0.0138, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 20:06:05,152 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8474, 3.3334, 3.4678, 3.6952, 3.6466, 3.3621, 3.9047, 1.1748], device='cuda:6'), covar=tensor([0.0888, 0.0961, 0.1028, 0.1116, 0.1284, 0.1709, 0.0859, 0.5523], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0243, 0.0274, 0.0293, 0.0332, 0.0279, 0.0298, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:06:15,750 INFO [finetune.py:976] (6/7) Epoch 16, batch 5450, loss[loss=0.1984, simple_loss=0.2594, pruned_loss=0.06869, over 4022.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2507, pruned_loss=0.05721, over 951666.05 frames. ], batch size: 17, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:06:26,190 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-26 20:06:49,418 INFO [finetune.py:976] (6/7) Epoch 16, batch 5500, loss[loss=0.1411, simple_loss=0.2207, pruned_loss=0.03074, over 4768.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2472, pruned_loss=0.05559, over 954181.44 frames. 
], batch size: 26, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:07:07,349 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9587, 1.9401, 1.5709, 1.7099, 1.9383, 1.6582, 2.1528, 1.9275], device='cuda:6'), covar=tensor([0.1305, 0.1936, 0.2940, 0.2516, 0.2527, 0.1675, 0.3115, 0.1674], device='cuda:6'), in_proj_covar=tensor([0.0183, 0.0188, 0.0234, 0.0254, 0.0245, 0.0202, 0.0212, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:07:10,224 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.789e+01 1.460e+02 1.744e+02 2.187e+02 6.443e+02, threshold=3.488e+02, percent-clipped=1.0 2023-03-26 20:07:11,594 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1886, 1.8568, 2.2283, 2.1451, 1.9019, 1.9199, 2.1647, 1.9968], device='cuda:6'), covar=tensor([0.4292, 0.4069, 0.3291, 0.4083, 0.4990, 0.3815, 0.4782, 0.3226], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0240, 0.0259, 0.0272, 0.0270, 0.0245, 0.0282, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:07:22,094 INFO [finetune.py:976] (6/7) Epoch 16, batch 5550, loss[loss=0.2446, simple_loss=0.3089, pruned_loss=0.09017, over 4843.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2491, pruned_loss=0.05639, over 954930.08 frames. ], batch size: 49, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:07:47,587 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91505.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:07:53,902 INFO [finetune.py:976] (6/7) Epoch 16, batch 5600, loss[loss=0.2009, simple_loss=0.2741, pruned_loss=0.06379, over 4855.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2523, pruned_loss=0.05708, over 953629.92 frames. ], batch size: 31, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:08:08,984 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91542.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:08:13,222 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.603e+02 1.969e+02 2.458e+02 5.397e+02, threshold=3.938e+02, percent-clipped=5.0 2023-03-26 20:08:16,176 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91553.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:08:23,633 INFO [finetune.py:976] (6/7) Epoch 16, batch 5650, loss[loss=0.1734, simple_loss=0.2595, pruned_loss=0.04367, over 4771.00 frames. ], tot_loss[loss=0.1863, simple_loss=0.256, pruned_loss=0.05832, over 952713.97 frames. 
], batch size: 54, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:08:34,791 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8379, 1.3789, 1.0606, 1.7960, 2.0984, 1.7700, 1.6272, 1.7753], device='cuda:6'), covar=tensor([0.1420, 0.1929, 0.1990, 0.1063, 0.1789, 0.2089, 0.1263, 0.1739], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0095, 0.0112, 0.0092, 0.0119, 0.0095, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 20:08:43,136 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4496, 1.3692, 1.5046, 0.7395, 1.4776, 1.5171, 1.4563, 1.3344], device='cuda:6'), covar=tensor([0.0484, 0.0711, 0.0610, 0.0882, 0.0846, 0.0634, 0.0566, 0.1121], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0134, 0.0141, 0.0124, 0.0124, 0.0140, 0.0142, 0.0165], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:08:45,671 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91603.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 20:08:52,098 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0894, 2.0244, 1.6451, 1.6271, 2.0619, 1.8128, 2.3638, 2.0450], device='cuda:6'), covar=tensor([0.1434, 0.1995, 0.3472, 0.2854, 0.2612, 0.1849, 0.1951, 0.1965], device='cuda:6'), in_proj_covar=tensor([0.0183, 0.0188, 0.0234, 0.0253, 0.0245, 0.0202, 0.0213, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:08:52,365 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 20:08:53,176 INFO [finetune.py:976] (6/7) Epoch 16, batch 5700, loss[loss=0.1536, simple_loss=0.2013, pruned_loss=0.053, over 3874.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2522, pruned_loss=0.05783, over 934092.26 frames. ], batch size: 16, lr: 3.42e-03, grad_scale: 32.0 2023-03-26 20:09:07,900 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91641.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:09:21,370 INFO [finetune.py:976] (6/7) Epoch 17, batch 0, loss[loss=0.2159, simple_loss=0.2829, pruned_loss=0.07442, over 4890.00 frames. ], tot_loss[loss=0.2159, simple_loss=0.2829, pruned_loss=0.07442, over 4890.00 frames. 
], batch size: 37, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:09:21,370 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 20:09:23,585 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8913, 1.1472, 2.0138, 1.8809, 1.7896, 1.7195, 1.7646, 1.9257], device='cuda:6'), covar=tensor([0.4529, 0.4380, 0.3962, 0.4026, 0.5198, 0.4126, 0.4885, 0.3336], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0240, 0.0259, 0.0271, 0.0270, 0.0245, 0.0282, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:09:23,647 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9004, 3.4624, 3.5836, 3.7595, 3.6747, 3.4284, 3.9782, 1.3474], device='cuda:6'), covar=tensor([0.0899, 0.0872, 0.0924, 0.1049, 0.1433, 0.1708, 0.0801, 0.5121], device='cuda:6'), in_proj_covar=tensor([0.0345, 0.0241, 0.0272, 0.0290, 0.0330, 0.0278, 0.0296, 0.0291], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:09:24,166 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5130, 1.2994, 1.3572, 1.4885, 1.7523, 1.6274, 1.4362, 1.2540], device='cuda:6'), covar=tensor([0.0362, 0.0306, 0.0613, 0.0329, 0.0228, 0.0459, 0.0334, 0.0401], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0108, 0.0144, 0.0113, 0.0100, 0.0108, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3808e-05, 8.3862e-05, 1.1375e-04, 8.6997e-05, 7.7713e-05, 7.9723e-05, 7.3573e-05, 8.3274e-05], device='cuda:6') 2023-03-26 20:09:32,014 INFO [finetune.py:1010] (6/7) Epoch 17, validation: loss=0.1591, simple_loss=0.2283, pruned_loss=0.04492, over 2265189.00 frames. 2023-03-26 20:09:32,014 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 20:09:35,492 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.774e+01 1.479e+02 1.757e+02 2.057e+02 5.096e+02, threshold=3.514e+02, percent-clipped=1.0 2023-03-26 20:10:00,086 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0 2023-03-26 20:10:07,337 INFO [finetune.py:976] (6/7) Epoch 17, batch 50, loss[loss=0.1854, simple_loss=0.2572, pruned_loss=0.05682, over 4897.00 frames. ], tot_loss[loss=0.1876, simple_loss=0.2573, pruned_loss=0.05897, over 216490.98 frames. ], batch size: 36, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:10:16,239 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.6242, 3.2968, 2.8128, 1.3688, 3.0126, 2.5417, 2.4672, 2.7848], device='cuda:6'), covar=tensor([0.0743, 0.0813, 0.1666, 0.2399, 0.1474, 0.2041, 0.2052, 0.1182], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0194, 0.0198, 0.0181, 0.0211, 0.0205, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:10:23,548 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. 
limit=2.0 2023-03-26 20:10:32,213 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6530, 1.3158, 2.1419, 3.3291, 2.2406, 2.5517, 0.9345, 2.8221], device='cuda:6'), covar=tensor([0.2086, 0.2176, 0.1704, 0.0947, 0.1028, 0.1504, 0.2283, 0.0652], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0164, 0.0100, 0.0137, 0.0123, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 20:10:52,790 INFO [finetune.py:976] (6/7) Epoch 17, batch 100, loss[loss=0.1408, simple_loss=0.2166, pruned_loss=0.03253, over 4766.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2497, pruned_loss=0.05758, over 381869.40 frames. ], batch size: 27, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:11:01,250 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.600e+02 1.810e+02 2.096e+02 3.529e+02, threshold=3.620e+02, percent-clipped=1.0 2023-03-26 20:11:09,374 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0 2023-03-26 20:11:37,626 INFO [finetune.py:976] (6/7) Epoch 17, batch 150, loss[loss=0.1572, simple_loss=0.2259, pruned_loss=0.04423, over 4675.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2453, pruned_loss=0.05552, over 508689.58 frames. ], batch size: 23, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:12:11,013 INFO [finetune.py:976] (6/7) Epoch 17, batch 200, loss[loss=0.2042, simple_loss=0.2566, pruned_loss=0.07588, over 4193.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2449, pruned_loss=0.05545, over 608301.21 frames. ], batch size: 65, lr: 3.41e-03, grad_scale: 32.0 2023-03-26 20:12:11,700 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1382, 1.2981, 1.2589, 1.3695, 1.3822, 2.4069, 1.1744, 1.3630], device='cuda:6'), covar=tensor([0.0940, 0.1730, 0.1284, 0.0858, 0.1564, 0.0415, 0.1521, 0.1776], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 20:12:14,525 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.595e+02 1.938e+02 2.273e+02 4.627e+02, threshold=3.876e+02, percent-clipped=4.0 2023-03-26 20:12:43,715 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91891.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:12:44,794 INFO [finetune.py:976] (6/7) Epoch 17, batch 250, loss[loss=0.2169, simple_loss=0.2701, pruned_loss=0.08183, over 4896.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2476, pruned_loss=0.05652, over 685307.16 frames. 
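The "over N frames" counts attached to tot_loss pin down how the running statistic is kept: inside Epoch 16 the count sits near a ~955k steady state, and after the reset at Epoch 17, batch 0 it climbs along 1 - 0.995^k (216k frames by batch 50, 382k by batch 100, 509k by batch 150). That matches an exponentially decayed running sum with a per-batch decay of about 1 - 1/200; both the decay constant and the ~4800-frame average batch below are inferred from the log, not read from the code:

# Toy reproduction of the tot_loss frame counter: decay the running sum by
# (1 - 1/200) each batch and add the new batch (assumed ~4800 frames).
DECAY = 1.0 - 1.0 / 200.0
FRAMES_PER_BATCH = 4800.0

running = 0.0
for k in range(1, 201):
    running = running * DECAY + FRAMES_PER_BATCH
    if k in (50, 100, 150):
        print(f"batch {k}: ~{running:,.0f} frames")  # ~213k, ~378k, ~507k
# long-run limit: FRAMES_PER_BATCH / (1 - DECAY) = 960,000 frames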
2023-03-26 20:12:48,432 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=91898.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:12:50,144 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2673, 2.2415, 2.0197, 2.0414, 2.8201, 2.6939, 2.2717, 2.3292], device='cuda:6'), covar=tensor([0.0374, 0.0319, 0.0507, 0.0319, 0.0200, 0.0440, 0.0272, 0.0320], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0108, 0.0144, 0.0113, 0.0100, 0.0108, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.4044e-05, 8.3918e-05, 1.1392e-04, 8.7083e-05, 7.7791e-05, 7.9603e-05, 7.3818e-05, 8.3000e-05], device='cuda:6')
2023-03-26 20:13:14,610 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9334, 1.8355, 2.0067, 1.4129, 2.0775, 2.1074, 2.1058, 1.3743], device='cuda:6'), covar=tensor([0.0711, 0.0811, 0.0702, 0.1037, 0.0687, 0.0760, 0.0682, 0.1724], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0134, 0.0141, 0.0124, 0.0124, 0.0140, 0.0141, 0.0165], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:13:17,060 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=91941.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:13:18,231 INFO [finetune.py:976] (6/7) Epoch 17, batch 300, loss[loss=0.1969, simple_loss=0.2662, pruned_loss=0.06381, over 4737.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2501, pruned_loss=0.05657, over 744950.96 frames. ], batch size: 59, lr: 3.41e-03, grad_scale: 32.0
2023-03-26 20:13:21,758 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.873e+01 1.605e+02 2.003e+02 2.239e+02 3.510e+02, threshold=4.006e+02, percent-clipped=0.0
2023-03-26 20:13:21,983 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-26 20:13:24,424 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=91952.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:13:26,508 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8249, 4.0049, 3.7827, 1.9919, 4.0771, 3.1704, 0.9254, 2.8684], device='cuda:6'), covar=tensor([0.2171, 0.2446, 0.1562, 0.3438, 0.1121, 0.0966, 0.4671, 0.1655], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0174, 0.0159, 0.0128, 0.0158, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 20:13:27,185 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91955.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:13:33,619 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7185, 1.6465, 1.6091, 1.6077, 1.1324, 3.0289, 1.2685, 1.7320], device='cuda:6'), covar=tensor([0.3123, 0.2309, 0.2015, 0.2368, 0.1814, 0.0282, 0.2471, 0.1198], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:13:40,215 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6077, 1.6099, 1.3253, 1.6815, 2.0595, 1.8621, 1.6151, 1.4356], device='cuda:6'), covar=tensor([0.0339, 0.0327, 0.0627, 0.0316, 0.0187, 0.0546, 0.0347, 0.0396], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0108, 0.0144, 0.0113, 0.0100, 0.0108, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.4055e-05, 8.3752e-05, 1.1391e-04, 8.7247e-05, 7.7784e-05, 7.9726e-05, 7.3843e-05, 8.2929e-05], device='cuda:6')
2023-03-26 20:13:42,112 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.50 vs. limit=5.0
2023-03-26 20:13:44,460 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=91981.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 20:13:49,289 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=91989.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:13:51,722 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-26 20:13:52,143 INFO [finetune.py:976] (6/7) Epoch 17, batch 350, loss[loss=0.2191, simple_loss=0.2861, pruned_loss=0.07607, over 4174.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2528, pruned_loss=0.05781, over 789893.97 frames. ], batch size: 65, lr: 3.41e-03, grad_scale: 32.0
2023-03-26 20:14:09,834 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92016.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:14:19,917 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3187, 2.9241, 2.7428, 1.3237, 3.0266, 2.2574, 0.7737, 1.8510], device='cuda:6'), covar=tensor([0.2294, 0.2025, 0.1904, 0.3428, 0.1257, 0.1139, 0.4299, 0.1736], device='cuda:6'), in_proj_covar=tensor([0.0148, 0.0172, 0.0157, 0.0127, 0.0156, 0.0121, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 20:14:26,179 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92042.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:14:26,649 INFO [finetune.py:976] (6/7) Epoch 17, batch 400, loss[loss=0.1697, simple_loss=0.2323, pruned_loss=0.05351, over 4692.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2531, pruned_loss=0.05705, over 826415.98 frames. ], batch size: 23, lr: 3.41e-03, grad_scale: 32.0
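The [zipformer.py:1188] entries track stochastic layer skipping ("layer dropout") while each encoder stack is still inside its warmup window (warmup_begin/warmup_end, measured in batches): on each batch a few layers may be skipped at random, e.g. num_to_drop=1, layers_to_drop={0}. A minimal sketch of the idea, with made-up names and probabilities rather than the real Zipformer schedule:

import random

def choose_layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end):
    # Assumed schedule for illustration: drop layers more often early in the
    # warmup window, then taper off. Returns e.g. (1, {0}) or (0, set()).
    span = max(warmup_end - warmup_begin, 1.0)
    progress = min(max((batch_count - warmup_begin) / span, 0.0), 1.0)
    drop_prob = 0.075 * (1.0 - 0.5 * progress)  # made-up numbers
    to_drop = {i for i in range(num_layers) if random.random() < drop_prob}
    return len(to_drop), to_drop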
2023-03-26 20:14:30,188 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.886e+01 1.544e+02 1.847e+02 2.163e+02 3.487e+02, threshold=3.695e+02, percent-clipped=0.0
2023-03-26 20:14:37,411 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0
2023-03-26 20:15:00,227 INFO [finetune.py:976] (6/7) Epoch 17, batch 450, loss[loss=0.1884, simple_loss=0.2587, pruned_loss=0.05901, over 4728.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2523, pruned_loss=0.05699, over 853964.39 frames. ], batch size: 59, lr: 3.41e-03, grad_scale: 32.0
2023-03-26 20:15:33,736 INFO [finetune.py:976] (6/7) Epoch 17, batch 500, loss[loss=0.18, simple_loss=0.2491, pruned_loss=0.0555, over 4898.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2505, pruned_loss=0.05651, over 877360.44 frames. ], batch size: 36, lr: 3.41e-03, grad_scale: 32.0
2023-03-26 20:15:37,218 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.578e+02 1.863e+02 2.269e+02 4.074e+02, threshold=3.727e+02, percent-clipped=2.0
2023-03-26 20:16:01,712 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9431, 1.8037, 1.6179, 1.4392, 1.9768, 1.6848, 1.8871, 1.9098], device='cuda:6'), covar=tensor([0.1356, 0.1924, 0.3057, 0.2397, 0.2564, 0.1674, 0.2833, 0.1655], device='cuda:6'), in_proj_covar=tensor([0.0181, 0.0186, 0.0233, 0.0250, 0.0242, 0.0200, 0.0211, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:16:05,180 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4756, 1.3694, 1.8462, 2.8961, 1.8690, 2.2039, 0.9329, 2.4341], device='cuda:6'), covar=tensor([0.1609, 0.1431, 0.1201, 0.0696, 0.0932, 0.1371, 0.1639, 0.0523], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0165, 0.0102, 0.0138, 0.0124, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:16:30,566 INFO [finetune.py:976] (6/7) Epoch 17, batch 550, loss[loss=0.1645, simple_loss=0.246, pruned_loss=0.04153, over 4806.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.248, pruned_loss=0.05634, over 893696.88 frames. ], batch size: 45, lr: 3.41e-03, grad_scale: 32.0
2023-03-26 20:16:33,717 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92198.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 20:17:13,334 INFO [finetune.py:976] (6/7) Epoch 17, batch 600, loss[loss=0.2133, simple_loss=0.2821, pruned_loss=0.07228, over 4831.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2492, pruned_loss=0.05719, over 907413.74 frames. ], batch size: 49, lr: 3.41e-03, grad_scale: 64.0
2023-03-26 20:17:15,214 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92246.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:17:15,825 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92247.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:17:16,353 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.142e+02 1.709e+02 1.980e+02 2.349e+02 5.069e+02, threshold=3.960e+02, percent-clipped=5.0
2023-03-26 20:17:28,308 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2252, 2.0808, 1.6275, 2.0305, 2.1258, 1.7932, 2.4103, 2.2185], device='cuda:6'), covar=tensor([0.1270, 0.2137, 0.3034, 0.2750, 0.2522, 0.1677, 0.3525, 0.1628], device='cuda:6'), in_proj_covar=tensor([0.0181, 0.0186, 0.0233, 0.0251, 0.0243, 0.0200, 0.0211, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:17:39,477 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9575, 1.9178, 1.5019, 1.7029, 1.8335, 1.7553, 1.7835, 2.5449], device='cuda:6'), covar=tensor([0.4181, 0.4423, 0.3567, 0.4416, 0.4239, 0.2647, 0.4236, 0.1922], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0261, 0.0227, 0.0277, 0.0250, 0.0217, 0.0249, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:17:47,091 INFO [finetune.py:976] (6/7) Epoch 17, batch 650, loss[loss=0.168, simple_loss=0.2535, pruned_loss=0.04121, over 4788.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2541, pruned_loss=0.05874, over 917948.24 frames. ], batch size: 29, lr: 3.41e-03, grad_scale: 64.0
2023-03-26 20:17:52,713 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6676, 1.6161, 2.0965, 3.5949, 2.3602, 2.5925, 1.1901, 2.8654], device='cuda:6'), covar=tensor([0.1883, 0.1553, 0.1578, 0.0620, 0.0867, 0.1236, 0.1864, 0.0586], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0164, 0.0101, 0.0136, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:17:58,669 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92311.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:18:16,768 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92337.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 20:18:20,699 INFO [finetune.py:976] (6/7) Epoch 17, batch 700, loss[loss=0.1754, simple_loss=0.2496, pruned_loss=0.05057, over 4778.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.255, pruned_loss=0.05814, over 926394.62 frames. ], batch size: 29, lr: 3.41e-03, grad_scale: 64.0
2023-03-26 20:18:23,723 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.116e+02 1.553e+02 1.844e+02 2.135e+02 3.970e+02, threshold=3.688e+02, percent-clipped=1.0
2023-03-26 20:18:52,754 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-26 20:18:54,353 INFO [finetune.py:976] (6/7) Epoch 17, batch 750, loss[loss=0.2015, simple_loss=0.269, pruned_loss=0.06698, over 4744.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2556, pruned_loss=0.05775, over 934203.21 frames. ], batch size: 54, lr: 3.41e-03, grad_scale: 64.0
2023-03-26 20:19:28,150 INFO [finetune.py:976] (6/7) Epoch 17, batch 800, loss[loss=0.1863, simple_loss=0.2501, pruned_loss=0.06128, over 4918.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2545, pruned_loss=0.05719, over 940874.16 frames. ], batch size: 33, lr: 3.41e-03, grad_scale: 64.0
2023-03-26 20:19:31,196 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.753e+02 1.963e+02 2.342e+02 4.288e+02, threshold=3.926e+02, percent-clipped=2.0
2023-03-26 20:19:32,540 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2475, 1.3196, 1.5832, 1.0215, 1.2514, 1.4557, 1.3219, 1.6517], device='cuda:6'), covar=tensor([0.1234, 0.1993, 0.1223, 0.1546, 0.0937, 0.1243, 0.2623, 0.0718], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0203, 0.0190, 0.0189, 0.0176, 0.0213, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:19:33,289 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-26 20:19:42,769 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4560, 2.4716, 1.8985, 2.5480, 2.4418, 2.0479, 2.8847, 2.4813], device='cuda:6'), covar=tensor([0.1370, 0.2277, 0.3030, 0.2739, 0.2458, 0.1636, 0.3434, 0.1654], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0187, 0.0234, 0.0253, 0.0244, 0.0201, 0.0213, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:19:51,708 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0
2023-03-26 20:20:01,479 INFO [finetune.py:976] (6/7) Epoch 17, batch 850, loss[loss=0.1656, simple_loss=0.2402, pruned_loss=0.04547, over 4726.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2531, pruned_loss=0.0569, over 940473.64 frames. ], batch size: 54, lr: 3.41e-03, grad_scale: 64.0
2023-03-26 20:20:03,324 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4962, 1.3758, 1.4061, 1.3202, 0.8892, 2.2552, 0.7566, 1.1900], device='cuda:6'), covar=tensor([0.3389, 0.2602, 0.2272, 0.2559, 0.1906, 0.0393, 0.2686, 0.1429], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:20:09,340 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7972, 3.9731, 3.7110, 1.8783, 4.0392, 3.1834, 1.0142, 2.8429], device='cuda:6'), covar=tensor([0.2325, 0.1817, 0.1580, 0.3446, 0.1010, 0.0965, 0.4463, 0.1503], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0173, 0.0158, 0.0128, 0.0156, 0.0122, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 20:20:18,445 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6052, 1.6051, 2.2260, 1.7397, 1.8966, 4.2168, 1.6139, 1.8455], device='cuda:6'), covar=tensor([0.0948, 0.1760, 0.1105, 0.0986, 0.1416, 0.0184, 0.1450, 0.1625], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0078, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:20:35,312 INFO [finetune.py:976] (6/7) Epoch 17, batch 900, loss[loss=0.143, simple_loss=0.2055, pruned_loss=0.04024, over 4917.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2507, pruned_loss=0.05656, over 943763.65 frames. ], batch size: 32, lr: 3.41e-03, grad_scale: 64.0
2023-03-26 20:20:38,325 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92547.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:20:38,821 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.037e+02 1.480e+02 1.791e+02 2.296e+02 4.324e+02, threshold=3.582e+02, percent-clipped=2.0
2023-03-26 20:20:46,210 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8367, 1.0116, 1.8038, 1.7967, 1.5977, 1.5346, 1.6526, 1.7014], device='cuda:6'), covar=tensor([0.3217, 0.3566, 0.3043, 0.3024, 0.4029, 0.3278, 0.3735, 0.2848], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0239, 0.0257, 0.0271, 0.0269, 0.0244, 0.0281, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:21:06,946 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1710, 1.9567, 2.8613, 1.5605, 2.2601, 2.5245, 1.7744, 2.6150], device='cuda:6'), covar=tensor([0.1637, 0.2211, 0.1503, 0.2311, 0.1191, 0.1597, 0.2889, 0.1165], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0204, 0.0190, 0.0189, 0.0176, 0.0213, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:21:15,096 INFO [finetune.py:976] (6/7) Epoch 17, batch 950, loss[loss=0.2323, simple_loss=0.3031, pruned_loss=0.08079, over 4734.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2496, pruned_loss=0.05618, over 944875.08 frames. ], batch size: 59, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:21:16,912 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92595.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:21:37,232 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92611.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:21:54,768 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92625.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:22:05,398 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=92637.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 20:22:13,800 INFO [finetune.py:976] (6/7) Epoch 17, batch 1000, loss[loss=0.2318, simple_loss=0.3146, pruned_loss=0.07454, over 4803.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2515, pruned_loss=0.05708, over 943813.93 frames. ], batch size: 51, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:22:20,426 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.492e+01 1.711e+02 2.074e+02 2.603e+02 6.251e+02, threshold=4.148e+02, percent-clipped=4.0
2023-03-26 20:22:27,818 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92659.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:22:45,213 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=92685.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:22:45,835 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92686.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:22:50,929 INFO [finetune.py:976] (6/7) Epoch 17, batch 1050, loss[loss=0.1963, simple_loss=0.2637, pruned_loss=0.06449, over 4912.00 frames. ], tot_loss[loss=0.1856, simple_loss=0.2551, pruned_loss=0.05807, over 948626.25 frames. ], batch size: 43, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:23:08,348 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92720.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:23:23,714 INFO [finetune.py:976] (6/7) Epoch 17, batch 1100, loss[loss=0.1707, simple_loss=0.2483, pruned_loss=0.04658, over 4809.00 frames. ], tot_loss[loss=0.1859, simple_loss=0.2556, pruned_loss=0.0581, over 948701.30 frames. ], batch size: 40, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:23:27,195 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.694e+02 2.013e+02 2.338e+02 4.806e+02, threshold=4.026e+02, percent-clipped=2.0
2023-03-26 20:23:48,935 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92781.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:23:57,177 INFO [finetune.py:976] (6/7) Epoch 17, batch 1150, loss[loss=0.2317, simple_loss=0.3023, pruned_loss=0.08051, over 4231.00 frames. ], tot_loss[loss=0.1851, simple_loss=0.2551, pruned_loss=0.0576, over 950819.53 frames. ], batch size: 65, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:23:57,612 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.78 vs. limit=2.0
2023-03-26 20:24:22,956 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92831.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:24:31,079 INFO [finetune.py:976] (6/7) Epoch 17, batch 1200, loss[loss=0.1621, simple_loss=0.2321, pruned_loss=0.04609, over 4892.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2521, pruned_loss=0.0565, over 949608.70 frames. ], batch size: 35, lr: 3.40e-03, grad_scale: 64.0
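In the [finetune.py:976] entries, loss[...] refers to the current batch alone, while tot_loss[...] is a running average weighted by the number of frames seen so far in the epoch, which is why its "over N frames" count keeps growing. The bookkeeping amounts to something like the following (a simplified sketch; icefall's own metrics tracking does more than this):

class RunningLoss:
    """Frame-weighted running average, like the tot_loss[...] entries."""
    def __init__(self):
        self.weighted_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss, batch_frames):
        # Weight each batch's loss by how many frames it contained.
        self.weighted_sum += batch_loss * batch_frames
        self.frames += batch_frames

    @property
    def value(self):
        return self.weighted_sum / max(self.frames, 1.0)

For example, feeding update(0.1408, 4766) for batch 100 and the subsequent batches reproduces the slowly moving tot_loss values seen above.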
2023-03-26 20:24:34,572 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.909e+01 1.547e+02 1.742e+02 2.125e+02 5.044e+02, threshold=3.483e+02, percent-clipped=2.0
2023-03-26 20:25:04,212 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92892.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:25:04,688 INFO [finetune.py:976] (6/7) Epoch 17, batch 1250, loss[loss=0.2394, simple_loss=0.2822, pruned_loss=0.09828, over 4885.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2501, pruned_loss=0.05603, over 952378.33 frames. ], batch size: 35, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:25:08,903 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92899.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:25:13,610 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4020, 1.3513, 1.6519, 1.6835, 1.5095, 3.1953, 1.3406, 1.5028], device='cuda:6'), covar=tensor([0.0985, 0.1895, 0.1164, 0.0982, 0.1624, 0.0252, 0.1544, 0.1741], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0078, 0.0092, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:25:29,744 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92931.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:25:37,459 INFO [finetune.py:976] (6/7) Epoch 17, batch 1300, loss[loss=0.1876, simple_loss=0.2415, pruned_loss=0.0669, over 4901.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.248, pruned_loss=0.0556, over 953740.17 frames. ], batch size: 32, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:25:41,345 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.503e+02 1.790e+02 2.154e+02 4.064e+02, threshold=3.581e+02, percent-clipped=1.0
2023-03-26 20:25:49,702 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92960.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:25:56,832 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7032, 1.5608, 1.4264, 1.7495, 1.9684, 1.7314, 1.3519, 1.4114], device='cuda:6'), covar=tensor([0.2065, 0.1968, 0.1907, 0.1523, 0.1607, 0.1205, 0.2457, 0.1805], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0207, 0.0211, 0.0190, 0.0241, 0.0184, 0.0214, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:25:59,955 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=92975.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:26:03,543 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=92981.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:26:10,759 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=92992.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:26:11,261 INFO [finetune.py:976] (6/7) Epoch 17, batch 1350, loss[loss=0.2345, simple_loss=0.2849, pruned_loss=0.09203, over 4925.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2497, pruned_loss=0.05728, over 954901.98 frames. ], batch size: 33, lr: 3.40e-03, grad_scale: 64.0
2023-03-26 20:26:23,862 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93010.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:26:23,890 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9881, 1.8361, 1.5402, 1.6517, 1.7556, 1.7318, 1.7697, 2.4595], device='cuda:6'), covar=tensor([0.3693, 0.3794, 0.3176, 0.3537, 0.3757, 0.2182, 0.3253, 0.1590], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0225, 0.0276, 0.0249, 0.0217, 0.0249, 0.0229], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:26:51,528 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93036.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:26:51,540 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93036.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:27:00,928 INFO [finetune.py:976] (6/7) Epoch 17, batch 1400, loss[loss=0.1409, simple_loss=0.2152, pruned_loss=0.03332, over 4733.00 frames. ], tot_loss[loss=0.1842, simple_loss=0.2532, pruned_loss=0.05757, over 956115.11 frames. ], batch size: 23, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:27:03,491 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9438, 1.7296, 2.4156, 3.5540, 2.4779, 2.6542, 1.4140, 2.9529], device='cuda:6'), covar=tensor([0.1630, 0.1418, 0.1194, 0.0548, 0.0764, 0.1235, 0.1670, 0.0522], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0165, 0.0101, 0.0137, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:27:08,968 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.549e+02 1.883e+02 2.310e+02 4.523e+02, threshold=3.767e+02, percent-clipped=3.0
2023-03-26 20:27:29,906 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5232, 1.0646, 0.7643, 1.3622, 1.9825, 0.6966, 1.2879, 1.3350], device='cuda:6'), covar=tensor([0.1680, 0.2367, 0.1841, 0.1362, 0.1934, 0.2111, 0.1569, 0.2129], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0118, 0.0093, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:27:36,781 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93071.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:27:39,786 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93076.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:27:49,576 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6949, 1.2145, 0.7673, 1.5084, 2.0643, 0.9852, 1.4557, 1.5195], device='cuda:6'), covar=tensor([0.1480, 0.2098, 0.1978, 0.1224, 0.1885, 0.2012, 0.1379, 0.1918], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0118, 0.0093, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:27:50,101 INFO [finetune.py:976] (6/7) Epoch 17, batch 1450, loss[loss=0.2254, simple_loss=0.2976, pruned_loss=0.07661, over 4807.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2546, pruned_loss=0.05791, over 954183.72 frames. ], batch size: 40, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:27:53,151 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93097.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 20:27:56,079 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8552, 2.4571, 2.7451, 2.6621, 2.4340, 2.4314, 2.5689, 2.5362], device='cuda:6'), covar=tensor([0.3592, 0.3837, 0.2994, 0.3647, 0.4999, 0.3638, 0.4698, 0.2923], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0240, 0.0258, 0.0272, 0.0270, 0.0245, 0.0283, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:28:05,994 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93115.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:28:20,229 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1023, 1.6635, 2.4891, 4.1525, 2.8216, 2.7066, 0.9665, 3.4690], device='cuda:6'), covar=tensor([0.1695, 0.1569, 0.1398, 0.0513, 0.0791, 0.1684, 0.1993, 0.0377], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0165, 0.0101, 0.0137, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:28:23,795 INFO [finetune.py:976] (6/7) Epoch 17, batch 1500, loss[loss=0.1986, simple_loss=0.2621, pruned_loss=0.06761, over 4918.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2546, pruned_loss=0.05747, over 954385.40 frames. ], batch size: 38, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:28:27,863 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.651e+02 1.993e+02 2.270e+02 5.642e+02, threshold=3.987e+02, percent-clipped=1.0
2023-03-26 20:28:47,151 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93176.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:28:53,716 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93187.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:28:57,278 INFO [finetune.py:976] (6/7) Epoch 17, batch 1550, loss[loss=0.1909, simple_loss=0.2596, pruned_loss=0.06109, over 4809.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2544, pruned_loss=0.05686, over 955529.15 frames. ], batch size: 41, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:29:30,933 INFO [finetune.py:976] (6/7) Epoch 17, batch 1600, loss[loss=0.1558, simple_loss=0.231, pruned_loss=0.04034, over 4824.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2522, pruned_loss=0.05669, over 953402.76 frames. ], batch size: 30, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:29:34,595 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.059e+02 1.546e+02 1.807e+02 2.216e+02 3.989e+02, threshold=3.613e+02, percent-clipped=1.0
2023-03-26 20:29:38,728 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93255.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:29:45,559 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-26 20:29:57,409 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93281.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:30:01,034 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93287.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:30:04,616 INFO [finetune.py:976] (6/7) Epoch 17, batch 1650, loss[loss=0.1705, simple_loss=0.2256, pruned_loss=0.05765, over 4017.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2499, pruned_loss=0.05628, over 953797.20 frames. ], batch size: 17, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:30:06,580 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3806, 2.5522, 2.3886, 1.8301, 2.4362, 2.7117, 2.7304, 2.1749], device='cuda:6'), covar=tensor([0.0710, 0.0604, 0.0769, 0.0881, 0.0899, 0.0739, 0.0604, 0.1121], device='cuda:6'), in_proj_covar=tensor([0.0135, 0.0136, 0.0144, 0.0124, 0.0125, 0.0142, 0.0143, 0.0166], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:30:10,840 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4712, 1.3919, 1.3395, 1.4412, 0.8690, 2.2310, 0.7713, 1.1895], device='cuda:6'), covar=tensor([0.3402, 0.2542, 0.2266, 0.2415, 0.1876, 0.0342, 0.2636, 0.1411], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:30:19,620 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9566, 1.7969, 1.6991, 2.0966, 2.3800, 2.1313, 1.7769, 1.6301], device='cuda:6'), covar=tensor([0.2047, 0.2016, 0.1805, 0.1464, 0.1692, 0.1094, 0.2264, 0.1886], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0208, 0.0212, 0.0191, 0.0242, 0.0185, 0.0215, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:30:28,980 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93329.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:30:30,702 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93331.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:30:38,310 INFO [finetune.py:976] (6/7) Epoch 17, batch 1700, loss[loss=0.2141, simple_loss=0.274, pruned_loss=0.0771, over 4904.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.247, pruned_loss=0.0551, over 955296.43 frames. ], batch size: 35, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:30:41,942 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.487e+02 1.694e+02 2.142e+02 3.933e+02, threshold=3.388e+02, percent-clipped=2.0
2023-03-26 20:30:53,820 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93366.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:31:00,378 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93376.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:31:11,627 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93392.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:31:12,182 INFO [finetune.py:976] (6/7) Epoch 17, batch 1750, loss[loss=0.2367, simple_loss=0.3079, pruned_loss=0.08281, over 4941.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2508, pruned_loss=0.05747, over 955634.17 frames. ], batch size: 33, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:31:23,923 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3236, 1.2320, 1.1565, 1.3782, 1.6052, 1.4467, 1.2883, 1.1254], device='cuda:6'), covar=tensor([0.0305, 0.0335, 0.0605, 0.0309, 0.0195, 0.0432, 0.0353, 0.0364], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0108, 0.0143, 0.0112, 0.0099, 0.0107, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3349e-05, 8.3688e-05, 1.1285e-04, 8.6869e-05, 7.7223e-05, 7.9181e-05, 7.3554e-05, 8.3145e-05], device='cuda:6')
2023-03-26 20:31:33,284 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93424.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:31:48,778 INFO [finetune.py:976] (6/7) Epoch 17, batch 1800, loss[loss=0.2132, simple_loss=0.2718, pruned_loss=0.07734, over 4798.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2536, pruned_loss=0.05796, over 954837.42 frames. ], batch size: 51, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:31:56,893 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.552e+02 1.846e+02 2.179e+02 3.576e+02, threshold=3.692e+02, percent-clipped=3.0
2023-03-26 20:32:00,641 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5853, 1.4496, 1.4459, 1.5032, 1.0448, 3.0098, 1.0910, 1.5264], device='cuda:6'), covar=tensor([0.3285, 0.2483, 0.2205, 0.2482, 0.1852, 0.0242, 0.2619, 0.1357], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:32:20,903 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=93471.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:32:38,198 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.28 vs. limit=5.0
2023-03-26 20:32:41,045 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93487.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:32:49,592 INFO [finetune.py:976] (6/7) Epoch 17, batch 1850, loss[loss=0.2148, simple_loss=0.2978, pruned_loss=0.06584, over 4818.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2561, pruned_loss=0.05913, over 953945.91 frames. ], batch size: 33, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:33:20,425 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93535.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:33:26,143 INFO [finetune.py:976] (6/7) Epoch 17, batch 1900, loss[loss=0.1979, simple_loss=0.269, pruned_loss=0.06336, over 4903.00 frames. ], tot_loss[loss=0.1872, simple_loss=0.2567, pruned_loss=0.05891, over 954959.87 frames. ], batch size: 37, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:33:30,349 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.241e+02 1.618e+02 1.925e+02 2.327e+02 3.543e+02, threshold=3.851e+02, percent-clipped=0.0
2023-03-26 20:33:34,115 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93555.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:33:55,455 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93587.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:33:59,455 INFO [finetune.py:976] (6/7) Epoch 17, batch 1950, loss[loss=0.1509, simple_loss=0.2238, pruned_loss=0.03895, over 4744.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2546, pruned_loss=0.058, over 952999.57 frames. ], batch size: 26, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:34:06,617 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93603.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:34:16,818 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5221, 0.9223, 0.7450, 1.2589, 1.9586, 0.6705, 1.0736, 1.2747], device='cuda:6'), covar=tensor([0.1540, 0.2448, 0.1895, 0.1376, 0.1955, 0.2118, 0.1726, 0.2100], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0092, 0.0118, 0.0094, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:34:25,050 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93631.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:34:27,391 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93635.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:34:31,535 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5143, 2.1935, 2.8375, 1.9153, 2.7343, 2.7498, 2.0313, 2.8824], device='cuda:6'), covar=tensor([0.1228, 0.1840, 0.1455, 0.1889, 0.0715, 0.1354, 0.2435, 0.0709], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0205, 0.0190, 0.0190, 0.0177, 0.0214, 0.0218, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:34:32,610 INFO [finetune.py:976] (6/7) Epoch 17, batch 2000, loss[loss=0.1748, simple_loss=0.2463, pruned_loss=0.05172, over 4930.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2525, pruned_loss=0.05753, over 953682.26 frames. ], batch size: 33, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:34:37,203 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.489e+01 1.528e+02 1.753e+02 2.103e+02 5.258e+02, threshold=3.506e+02, percent-clipped=1.0
2023-03-26 20:34:39,684 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3394, 1.5184, 1.5723, 0.9012, 1.5207, 1.8111, 1.8246, 1.4213], device='cuda:6'), covar=tensor([0.0828, 0.0533, 0.0513, 0.0502, 0.0417, 0.0522, 0.0323, 0.0615], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0124, 0.0128, 0.0132, 0.0129, 0.0144, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.2335e-05, 1.1033e-04, 8.9091e-05, 9.1275e-05, 9.2970e-05, 9.3154e-05, 1.0363e-04, 1.0742e-04], device='cuda:6')
2023-03-26 20:34:48,160 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93666.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:34:56,475 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93679.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:35:05,807 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93692.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:35:06,329 INFO [finetune.py:976] (6/7) Epoch 17, batch 2050, loss[loss=0.209, simple_loss=0.2635, pruned_loss=0.07726, over 4936.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2504, pruned_loss=0.05711, over 954782.18 frames. ], batch size: 33, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:35:20,566 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93714.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:35:37,565 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.59 vs. limit=5.0
2023-03-26 20:35:37,805 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93740.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:35:39,568 INFO [finetune.py:976] (6/7) Epoch 17, batch 2100, loss[loss=0.1443, simple_loss=0.2169, pruned_loss=0.03583, over 4822.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2504, pruned_loss=0.05731, over 955095.91 frames. ], batch size: 25, lr: 3.40e-03, grad_scale: 32.0
2023-03-26 20:35:43,621 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.402e+01 1.568e+02 1.860e+02 2.232e+02 5.340e+02, threshold=3.720e+02, percent-clipped=4.0
2023-03-26 20:35:56,225 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6149, 2.5352, 2.1035, 0.9339, 2.2064, 1.9586, 1.8217, 2.1870], device='cuda:6'), covar=tensor([0.0880, 0.0711, 0.1338, 0.1964, 0.1422, 0.2178, 0.2049, 0.1017], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0193, 0.0196, 0.0181, 0.0210, 0.0204, 0.0221, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:35:57,994 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93770.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:35:58,604 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=93771.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:36:07,747 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 20:36:13,277 INFO [finetune.py:976] (6/7) Epoch 17, batch 2150, loss[loss=0.227, simple_loss=0.3095, pruned_loss=0.07223, over 4826.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2527, pruned_loss=0.05809, over 955886.71 frames. ], batch size: 40, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:36:31,173 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=93819.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:36:38,633 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=93831.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:36:47,327 INFO [finetune.py:976] (6/7) Epoch 17, batch 2200, loss[loss=0.1897, simple_loss=0.2665, pruned_loss=0.05642, over 4815.00 frames. ], tot_loss[loss=0.1861, simple_loss=0.2555, pruned_loss=0.05836, over 956707.96 frames. ], batch size: 51, lr: 3.39e-03, grad_scale: 32.0
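The [scaling.py:679] entries compare a "whitening" statistic against a limit (e.g. metric=1.27 vs. limit=2.0): the metric is 1.0 when a channel group's feature covariance has equal eigenvalues and grows as the covariance becomes more lopsided, and the module only intervenes when the limit is exceeded. One way such a metric can be computed, sketched in the same spirit but not copied from scaling.py:

import torch

def whitening_metric(x, num_groups):
    # x: (num_frames, num_channels). Split channels into groups and measure
    # how far each group's covariance is from a multiple of the identity:
    # d * trace(C @ C) / trace(C)**2 equals 1.0 when all eigenvalues match.
    n, c = x.shape
    d = c // num_groups
    xg = x.reshape(n, num_groups, d).transpose(0, 1)        # (groups, n, d)
    cov = torch.matmul(xg.transpose(1, 2), xg) / n          # (groups, d, d)
    num = (cov * cov).sum(dim=(1, 2)) * d                   # ~ d * sum(eig**2)
    den = cov.diagonal(dim1=1, dim2=2).sum(dim=1) ** 2      # ~ sum(eig)**2
    return (num / den).mean()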
2023-03-26 20:36:51,480 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.198e+02 1.569e+02 1.869e+02 2.308e+02 4.137e+02, threshold=3.738e+02, percent-clipped=1.0
2023-03-26 20:37:19,380 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1787, 1.9855, 2.1573, 1.5356, 2.1710, 2.2598, 2.3563, 1.7888], device='cuda:6'), covar=tensor([0.0546, 0.0590, 0.0645, 0.0941, 0.0611, 0.0610, 0.0510, 0.1036], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0135, 0.0141, 0.0123, 0.0124, 0.0140, 0.0142, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:37:25,360 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9242, 1.8233, 1.7309, 2.0319, 1.4573, 4.6449, 1.6472, 2.1702], device='cuda:6'), covar=tensor([0.3263, 0.2468, 0.2080, 0.2235, 0.1650, 0.0129, 0.2457, 0.1227], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:37:36,157 INFO [finetune.py:976] (6/7) Epoch 17, batch 2250, loss[loss=0.2114, simple_loss=0.2746, pruned_loss=0.07416, over 4893.00 frames. ], tot_loss[loss=0.1865, simple_loss=0.2559, pruned_loss=0.05856, over 956396.34 frames. ], batch size: 32, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:38:15,215 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.55 vs. limit=2.0
2023-03-26 20:38:25,535 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0
2023-03-26 20:38:30,051 INFO [finetune.py:976] (6/7) Epoch 17, batch 2300, loss[loss=0.1744, simple_loss=0.2456, pruned_loss=0.05162, over 4814.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2546, pruned_loss=0.0572, over 958205.12 frames. ], batch size: 41, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:38:34,187 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.189e+02 1.601e+02 1.890e+02 2.328e+02 3.292e+02, threshold=3.781e+02, percent-clipped=0.0
2023-03-26 20:38:56,120 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=93981.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:39:03,827 INFO [finetune.py:976] (6/7) Epoch 17, batch 2350, loss[loss=0.1537, simple_loss=0.2224, pruned_loss=0.04246, over 4924.00 frames. ], tot_loss[loss=0.182, simple_loss=0.252, pruned_loss=0.05603, over 958281.05 frames. ], batch size: 33, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:39:37,892 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94042.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:39:38,384 INFO [finetune.py:976] (6/7) Epoch 17, batch 2400, loss[loss=0.1544, simple_loss=0.2247, pruned_loss=0.04209, over 4894.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2493, pruned_loss=0.05531, over 959918.06 frames. ], batch size: 32, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:39:42,505 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.475e+02 1.781e+02 2.110e+02 4.538e+02, threshold=3.563e+02, percent-clipped=2.0
2023-03-26 20:40:09,955 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3090, 1.2822, 1.5867, 1.0906, 1.3484, 1.4639, 1.2728, 1.6369], device='cuda:6'), covar=tensor([0.1129, 0.2167, 0.1163, 0.1492, 0.0824, 0.1251, 0.2840, 0.0685], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0204, 0.0189, 0.0190, 0.0176, 0.0212, 0.0216, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:40:11,636 INFO [finetune.py:976] (6/7) Epoch 17, batch 2450, loss[loss=0.1807, simple_loss=0.2587, pruned_loss=0.05128, over 4826.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2472, pruned_loss=0.05473, over 960721.77 frames. ], batch size: 51, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:40:25,646 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-26 20:40:34,650 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94126.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:40:45,407 INFO [finetune.py:976] (6/7) Epoch 17, batch 2500, loss[loss=0.2019, simple_loss=0.2769, pruned_loss=0.06344, over 4873.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2486, pruned_loss=0.05494, over 960453.41 frames. ], batch size: 34, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:40:49,550 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.530e+01 1.683e+02 1.909e+02 2.220e+02 4.342e+02, threshold=3.819e+02, percent-clipped=2.0
2023-03-26 20:40:50,327 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7702, 1.6032, 1.4839, 1.7979, 2.1010, 1.8613, 1.3814, 1.5103], device='cuda:6'), covar=tensor([0.2379, 0.2152, 0.2165, 0.1875, 0.1640, 0.1293, 0.2617, 0.2155], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0192, 0.0241, 0.0185, 0.0215, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:40:57,261 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4510, 1.4174, 1.8188, 1.7490, 1.5502, 3.4057, 1.3788, 1.5398], device='cuda:6'), covar=tensor([0.0988, 0.1926, 0.1151, 0.1036, 0.1770, 0.0228, 0.1607, 0.1905], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0079, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:41:04,837 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.9643, 4.4697, 4.7202, 4.6306, 4.3930, 4.3552, 5.1522, 1.7170], device='cuda:6'), covar=tensor([0.1053, 0.1566, 0.1254, 0.2118, 0.1799, 0.2208, 0.0798, 0.7704], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0276, 0.0293, 0.0335, 0.0281, 0.0301, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:41:18,600 INFO [finetune.py:976] (6/7) Epoch 17, batch 2550, loss[loss=0.2346, simple_loss=0.3095, pruned_loss=0.07985, over 4909.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2511, pruned_loss=0.05531, over 957884.88 frames. ], batch size: 37, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:41:29,345 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8526, 3.7772, 3.5534, 1.6971, 3.8735, 2.9794, 0.7197, 2.6521], device='cuda:6'), covar=tensor([0.2130, 0.2050, 0.1501, 0.3480, 0.1055, 0.0906, 0.4661, 0.1440], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0175, 0.0158, 0.0129, 0.0158, 0.0123, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 20:41:38,787 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=94223.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:41:52,380 INFO [finetune.py:976] (6/7) Epoch 17, batch 2600, loss[loss=0.1435, simple_loss=0.2189, pruned_loss=0.03403, over 4757.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2525, pruned_loss=0.05625, over 957091.32 frames. ], batch size: 28, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:41:56,025 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.647e+02 1.955e+02 2.265e+02 3.573e+02, threshold=3.911e+02, percent-clipped=0.0
2023-03-26 20:42:13,950 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2212, 1.3834, 1.3761, 1.3899, 1.4910, 2.4226, 1.3033, 1.4908], device='cuda:6'), covar=tensor([0.0976, 0.1815, 0.1042, 0.0917, 0.1694, 0.0384, 0.1515, 0.1758], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0078, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:42:19,672 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94284.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:42:25,433 INFO [finetune.py:976] (6/7) Epoch 17, batch 2650, loss[loss=0.199, simple_loss=0.2691, pruned_loss=0.0645, over 4902.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2537, pruned_loss=0.05652, over 954899.74 frames. ], batch size: 37, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:43:04,265 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7288, 1.3524, 0.9185, 1.5801, 2.0884, 1.3733, 1.4379, 1.5465], device='cuda:6'), covar=tensor([0.1575, 0.2173, 0.1973, 0.1330, 0.2018, 0.2121, 0.1625, 0.2064], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:43:04,672 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 20:43:12,616 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94337.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:43:20,898 INFO [finetune.py:976] (6/7) Epoch 17, batch 2700, loss[loss=0.237, simple_loss=0.2973, pruned_loss=0.08835, over 4839.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2521, pruned_loss=0.05575, over 955143.85 frames. ], batch size: 47, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:43:21,626 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.9899, 0.8424, 0.9065, 1.0751, 1.1284, 1.0797, 0.9352, 0.8903], device='cuda:6'), covar=tensor([0.0391, 0.0330, 0.0696, 0.0313, 0.0293, 0.0511, 0.0400, 0.0444], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0107, 0.0142, 0.0112, 0.0099, 0.0107, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3042e-05, 8.2570e-05, 1.1207e-04, 8.6138e-05, 7.7043e-05, 7.9073e-05, 7.3191e-05, 8.2885e-05], device='cuda:6')
2023-03-26 20:43:27,718 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=94348.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:43:28,198 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.514e+02 1.766e+02 2.145e+02 4.618e+02, threshold=3.532e+02, percent-clipped=2.0
2023-03-26 20:43:40,876 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0
2023-03-26 20:43:51,601 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6021, 1.5271, 2.0686, 3.1625, 2.2433, 2.4475, 1.2652, 2.6421], device='cuda:6'), covar=tensor([0.1819, 0.1461, 0.1300, 0.0671, 0.0823, 0.1277, 0.1625, 0.0555], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0167, 0.0102, 0.0138, 0.0126, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 20:44:10,178 INFO [finetune.py:976] (6/7) Epoch 17, batch 2750, loss[loss=0.1877, simple_loss=0.244, pruned_loss=0.06567, over 4818.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.25, pruned_loss=0.05539, over 956623.29 frames. ], batch size: 38, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:44:20,195 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=94409.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 20:44:31,944 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94426.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:44:32,604 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2855, 1.7469, 2.3042, 2.1597, 1.9259, 1.9969, 2.1482, 2.0720], device='cuda:6'), covar=tensor([0.4069, 0.4254, 0.3034, 0.4036, 0.4862, 0.3607, 0.4718, 0.3113], device='cuda:6'), in_proj_covar=tensor([0.0248, 0.0240, 0.0259, 0.0274, 0.0271, 0.0246, 0.0284, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:44:43,081 INFO [finetune.py:976] (6/7) Epoch 17, batch 2800, loss[loss=0.1905, simple_loss=0.2479, pruned_loss=0.06657, over 4825.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2481, pruned_loss=0.05531, over 954497.44 frames. ], batch size: 41, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:44:47,185 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.577e+01 1.587e+02 1.887e+02 2.313e+02 4.372e+02, threshold=3.775e+02, percent-clipped=5.0
2023-03-26 20:44:55,811 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4981, 1.5995, 1.6744, 1.0007, 1.6688, 1.9393, 1.8119, 1.4605], device='cuda:6'), covar=tensor([0.0928, 0.0634, 0.0497, 0.0565, 0.0461, 0.0619, 0.0386, 0.0699], device='cuda:6'), in_proj_covar=tensor([0.0127, 0.0154, 0.0125, 0.0129, 0.0133, 0.0131, 0.0145, 0.0150], device='cuda:6'), out_proj_covar=tensor([9.3347e-05, 1.1134e-04, 8.9849e-05, 9.2042e-05, 9.3824e-05, 9.4270e-05, 1.0477e-04, 1.0837e-04], device='cuda:6')
2023-03-26 20:45:03,347 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=94474.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:45:09,939 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3396, 1.3514, 1.2009, 1.3876, 1.6727, 1.5444, 1.3321, 1.2441], device='cuda:6'), covar=tensor([0.0350, 0.0281, 0.0571, 0.0277, 0.0193, 0.0416, 0.0303, 0.0354], device='cuda:6'), in_proj_covar=tensor([0.0094, 0.0106, 0.0141, 0.0111, 0.0098, 0.0107, 0.0097, 0.0108], device='cuda:6'), out_proj_covar=tensor([7.2620e-05, 8.2064e-05, 1.1144e-04, 8.5567e-05, 7.6462e-05, 7.8809e-05, 7.2894e-05, 8.2584e-05], device='cuda:6')
2023-03-26 20:45:16,200 INFO [finetune.py:976] (6/7) Epoch 17, batch 2850, loss[loss=0.1796, simple_loss=0.2388, pruned_loss=0.06017, over 4719.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2469, pruned_loss=0.0552, over 955389.80 frames. ], batch size: 23, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:45:49,610 INFO [finetune.py:976] (6/7) Epoch 17, batch 2900, loss[loss=0.1721, simple_loss=0.2607, pruned_loss=0.04179, over 4825.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.25, pruned_loss=0.05588, over 954259.09 frames. ], batch size: 30, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:45:53,202 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.550e+02 1.801e+02 2.117e+02 3.911e+02, threshold=3.601e+02, percent-clipped=1.0
2023-03-26 20:46:12,850 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94579.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 20:46:22,375 INFO [finetune.py:976] (6/7) Epoch 17, batch 2950, loss[loss=0.233, simple_loss=0.3103, pruned_loss=0.07783, over 4049.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2522, pruned_loss=0.05649, over 953684.52 frames. ], batch size: 65, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:46:37,204 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-26 20:46:46,981 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1696, 1.3264, 1.4029, 1.3478, 1.4486, 2.3896, 1.2398, 1.4143], device='cuda:6'), covar=tensor([0.0953, 0.1737, 0.1111, 0.0918, 0.1565, 0.0383, 0.1444, 0.1653], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0078, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 20:46:48,230 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3745, 2.2947, 2.3668, 1.6656, 2.2800, 2.3928, 2.4129, 1.9574], device='cuda:6'), covar=tensor([0.0632, 0.0625, 0.0656, 0.0896, 0.0606, 0.0699, 0.0643, 0.1027], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0134, 0.0140, 0.0123, 0.0124, 0.0140, 0.0141, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:46:52,134 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94637.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:46:56,162 INFO [finetune.py:976] (6/7) Epoch 17, batch 3000, loss[loss=0.1606, simple_loss=0.2327, pruned_loss=0.04427, over 4869.00 frames. ], tot_loss[loss=0.1839, simple_loss=0.2535, pruned_loss=0.05713, over 954648.61 frames. ], batch size: 31, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:46:56,162 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 20:47:06,772 INFO [finetune.py:1010] (6/7) Epoch 17, validation: loss=0.1562, simple_loss=0.2257, pruned_loss=0.04335, over 2265189.00 frames.
2023-03-26 20:47:06,772 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB
2023-03-26 20:47:09,609 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0
2023-03-26 20:47:10,420 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.605e+02 1.916e+02 2.337e+02 3.800e+02, threshold=3.832e+02, percent-clipped=2.0
2023-03-26 20:47:33,747 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=94685.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:47:38,999 INFO [finetune.py:976] (6/7) Epoch 17, batch 3050, loss[loss=0.1904, simple_loss=0.2549, pruned_loss=0.06295, over 4708.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2534, pruned_loss=0.05649, over 955229.79 frames. ], batch size: 23, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:47:47,346 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=94704.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:48:19,139 INFO [finetune.py:976] (6/7) Epoch 17, batch 3100, loss[loss=0.1569, simple_loss=0.2203, pruned_loss=0.0467, over 4789.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.251, pruned_loss=0.05561, over 955732.57 frames. ], batch size: 26, lr: 3.39e-03, grad_scale: 32.0
2023-03-26 20:48:27,685 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.533e+02 1.793e+02 2.269e+02 8.706e+02, threshold=3.585e+02, percent-clipped=3.0
2023-03-26 20:49:17,518 INFO [finetune.py:976] (6/7) Epoch 17, batch 3150, loss[loss=0.1849, simple_loss=0.247, pruned_loss=0.06134, over 4820.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2497, pruned_loss=0.05558, over 957156.49 frames. ], batch size: 25, lr: 3.39e-03, grad_scale: 32.0
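The "Computing validation loss" / "validation:" pair above comes from the periodic validation pass: at the configured interval the model is put in eval mode and the same frame-weighted losses are accumulated over the whole dev set. Schematically (the model interface below is hypothetical, and the real loop also aggregates results across the DDP ranks):

import torch

def compute_validation_loss(model, valid_loader):
    # Accumulate frame-weighted loss over the dev set, then resume training.
    model.eval()
    loss_sum, frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = model(batch)  # hypothetical interface
            loss_sum += loss.item() * num_frames
            frames += num_frames
    model.train()
    return loss_sum / max(frames, 1.0)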
], batch size: 25, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:49:51,400 INFO [finetune.py:976] (6/7) Epoch 17, batch 3200, loss[loss=0.1151, simple_loss=0.1969, pruned_loss=0.01664, over 4759.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2463, pruned_loss=0.05433, over 957927.84 frames. ], batch size: 27, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:49:55,533 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.463e+02 1.766e+02 2.027e+02 4.168e+02, threshold=3.532e+02, percent-clipped=1.0 2023-03-26 20:50:16,321 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=94879.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 20:50:25,247 INFO [finetune.py:976] (6/7) Epoch 17, batch 3250, loss[loss=0.2259, simple_loss=0.2862, pruned_loss=0.0828, over 4837.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2477, pruned_loss=0.05536, over 957852.29 frames. ], batch size: 33, lr: 3.39e-03, grad_scale: 32.0 2023-03-26 20:50:48,436 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=94927.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 20:50:58,673 INFO [finetune.py:976] (6/7) Epoch 17, batch 3300, loss[loss=0.1734, simple_loss=0.2274, pruned_loss=0.05973, over 4826.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2517, pruned_loss=0.05626, over 956960.92 frames. ], batch size: 25, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 20:51:02,381 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.215e+02 1.785e+02 2.188e+02 2.532e+02 5.228e+02, threshold=4.375e+02, percent-clipped=4.0 2023-03-26 20:51:32,708 INFO [finetune.py:976] (6/7) Epoch 17, batch 3350, loss[loss=0.1295, simple_loss=0.2055, pruned_loss=0.02677, over 4789.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2552, pruned_loss=0.05727, over 957465.55 frames. ], batch size: 26, lr: 3.38e-03, grad_scale: 32.0 2023-03-26 20:51:36,490 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6437, 3.7062, 3.5403, 1.6319, 3.8360, 2.8605, 0.7737, 2.6358], device='cuda:6'), covar=tensor([0.2563, 0.1881, 0.1534, 0.3396, 0.0988, 0.0984, 0.4307, 0.1385], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0176, 0.0159, 0.0129, 0.0159, 0.0124, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 20:51:40,203 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95004.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 20:51:43,882 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-26 20:52:06,010 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8832, 1.6280, 1.4855, 1.2459, 1.6081, 1.5599, 1.5978, 2.1895], device='cuda:6'), covar=tensor([0.3792, 0.3779, 0.3152, 0.3546, 0.3551, 0.2413, 0.3416, 0.1776], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0261, 0.0228, 0.0277, 0.0251, 0.0219, 0.0251, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 20:52:06,488 INFO [finetune.py:976] (6/7) Epoch 17, batch 3400, loss[loss=0.2317, simple_loss=0.2936, pruned_loss=0.08488, over 4903.00 frames. ], tot_loss[loss=0.1869, simple_loss=0.2567, pruned_loss=0.05854, over 954558.07 frames. 
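A note on the optim.py:369 lines above: the five numbers after "grad-norm quartiles" read as min/25%/median/75%/max of recently observed gradient norms, and the reported threshold tracks Clipping_scale times the median (e.g. 2.0 * 2.188e+02 ~ 4.375e+02 in the last clipping entry above, up to display rounding). A minimal sketch of how such a summary could be produced, assuming a window of recent norms is retained; the actual bookkeeping in icefall's optim.py may differ in detail:

```python
# Hedged sketch of the optim.py:369 "grad-norm quartiles" summary.
# Assumption: a window of recently observed gradient norms is kept, and the
# clipping threshold is Clipping_scale times the median of that window.
import torch

def clipping_summary(recent_norms: torch.Tensor, clipping_scale: float = 2.0):
    q = torch.quantile(recent_norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * q[2]                        # scale the median
    pct = 100.0 * (recent_norms > threshold).float().mean()  # share clipped
    return q, threshold, pct

norms = 100.0 + 100.0 * torch.randn(500).abs()  # synthetic half-normal window
q, thr, pct = clipping_summary(norms)
print(f"grad-norm quartiles {q.tolist()}, threshold={thr:.4g}, percent-clipped={pct:.1f}")
```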
2023-03-26 20:52:10,131 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.060e+02 1.553e+02 1.863e+02 2.086e+02 3.757e+02, threshold=3.727e+02, percent-clipped=0.0
2023-03-26 20:52:12,047 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95052.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:52:16,216 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5854, 1.4094, 1.3001, 1.5631, 1.6945, 1.5543, 1.1440, 1.3183], device='cuda:6'), covar=tensor([0.2194, 0.2122, 0.1922, 0.1664, 0.1555, 0.1335, 0.2400, 0.1925], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0210, 0.0213, 0.0192, 0.0242, 0.0187, 0.0216, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:52:19,900 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7138, 1.4654, 1.8417, 1.2348, 1.7334, 1.8173, 1.3889, 1.9576], device='cuda:6'), covar=tensor([0.1076, 0.1919, 0.1293, 0.1790, 0.0768, 0.1319, 0.2798, 0.0718], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0204, 0.0190, 0.0190, 0.0177, 0.0212, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:52:29,396 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.79 vs. limit=5.0
2023-03-26 20:52:40,296 INFO [finetune.py:976] (6/7) Epoch 17, batch 3450, loss[loss=0.1599, simple_loss=0.2266, pruned_loss=0.04656, over 4739.00 frames. ], tot_loss[loss=0.1858, simple_loss=0.2559, pruned_loss=0.05784, over 953982.96 frames. ], batch size: 54, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:52:45,930 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2758, 2.2692, 1.7255, 2.2225, 2.1340, 1.8143, 2.5484, 2.2050], device='cuda:6'), covar=tensor([0.1289, 0.1979, 0.2972, 0.2575, 0.2639, 0.1597, 0.2875, 0.1805], device='cuda:6'), in_proj_covar=tensor([0.0183, 0.0188, 0.0234, 0.0252, 0.0244, 0.0202, 0.0212, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:52:47,679 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95105.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 20:53:10,909 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-26 20:53:13,090 INFO [finetune.py:976] (6/7) Epoch 17, batch 3500, loss[loss=0.1584, simple_loss=0.2156, pruned_loss=0.05056, over 4857.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2536, pruned_loss=0.05728, over 955666.86 frames. ], batch size: 49, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:53:17,181 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.619e+02 1.940e+02 2.279e+02 3.817e+02, threshold=3.880e+02, percent-clipped=1.0
2023-03-26 20:53:35,206 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95166.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 20:53:43,072 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2401, 2.0567, 1.7907, 2.0192, 1.9063, 1.9044, 1.9464, 2.7313], device='cuda:6'), covar=tensor([0.3563, 0.4162, 0.3298, 0.3800, 0.3957, 0.2473, 0.4002, 0.1660], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0227, 0.0276, 0.0251, 0.0219, 0.0250, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:53:47,154 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6171, 3.1444, 3.0514, 1.5521, 3.2871, 2.5324, 1.0467, 2.3521], device='cuda:6'), covar=tensor([0.2928, 0.2211, 0.1902, 0.3222, 0.1345, 0.1031, 0.4025, 0.1475], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0176, 0.0160, 0.0129, 0.0159, 0.0124, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 20:54:05,084 INFO [finetune.py:976] (6/7) Epoch 17, batch 3550, loss[loss=0.1624, simple_loss=0.2339, pruned_loss=0.04545, over 4828.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.2505, pruned_loss=0.05621, over 955887.92 frames. ], batch size: 33, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:54:49,398 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.82 vs. limit=5.0
2023-03-26 20:54:51,063 INFO [finetune.py:976] (6/7) Epoch 17, batch 3600, loss[loss=0.1483, simple_loss=0.226, pruned_loss=0.03527, over 4830.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2473, pruned_loss=0.05502, over 957967.00 frames. ], batch size: 30, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:54:54,641 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.580e+02 1.871e+02 2.182e+02 4.206e+02, threshold=3.742e+02, percent-clipped=1.0
2023-03-26 20:55:01,341 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9404, 4.3310, 4.1522, 2.1399, 4.4372, 3.3661, 1.0420, 2.9965], device='cuda:6'), covar=tensor([0.2600, 0.1588, 0.1261, 0.3272, 0.0744, 0.0842, 0.4362, 0.1403], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0175, 0.0159, 0.0129, 0.0159, 0.0124, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 20:55:02,008 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95260.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:55:06,883 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95268.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:55:12,784 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2448, 2.0215, 1.3980, 0.5368, 1.6197, 1.8277, 1.6368, 1.8131], device='cuda:6'), covar=tensor([0.0995, 0.0790, 0.1374, 0.1907, 0.1446, 0.2266, 0.2304, 0.0811], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0197, 0.0200, 0.0184, 0.0214, 0.0209, 0.0225, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:55:13,866 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9229, 1.7467, 2.2285, 1.3685, 2.0722, 2.1718, 1.6567, 2.3548], device='cuda:6'), covar=tensor([0.1320, 0.1909, 0.1521, 0.2324, 0.0937, 0.1549, 0.2724, 0.0857], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0204, 0.0189, 0.0191, 0.0177, 0.0212, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:55:18,441 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3890, 3.7821, 3.9712, 4.2646, 4.1548, 3.8501, 4.4560, 1.4124], device='cuda:6'), covar=tensor([0.0747, 0.0879, 0.0806, 0.0848, 0.1121, 0.1750, 0.0681, 0.5658], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0246, 0.0276, 0.0292, 0.0335, 0.0282, 0.0302, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:55:24,757 INFO [finetune.py:976] (6/7) Epoch 17, batch 3650, loss[loss=0.2109, simple_loss=0.2891, pruned_loss=0.06637, over 4833.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2505, pruned_loss=0.05689, over 957639.56 frames. ], batch size: 39, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:55:29,795 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2562, 2.0501, 1.8376, 2.0052, 1.9571, 1.9261, 2.0460, 2.7800], device='cuda:6'), covar=tensor([0.3354, 0.4328, 0.2945, 0.3641, 0.3790, 0.2368, 0.3700, 0.1565], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0227, 0.0277, 0.0251, 0.0219, 0.0251, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:55:42,969 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95321.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:55:48,239 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95329.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:55:55,819 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95339.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:55:58,602 INFO [finetune.py:976] (6/7) Epoch 17, batch 3700, loss[loss=0.1929, simple_loss=0.2649, pruned_loss=0.06048, over 4899.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2533, pruned_loss=0.05741, over 957191.53 frames. ], batch size: 43, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:56:02,235 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.605e+02 1.907e+02 2.409e+02 3.957e+02, threshold=3.813e+02, percent-clipped=4.0
2023-03-26 20:56:31,739 INFO [finetune.py:976] (6/7) Epoch 17, batch 3750, loss[loss=0.1901, simple_loss=0.2717, pruned_loss=0.05423, over 4895.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2539, pruned_loss=0.05733, over 954810.93 frames. ], batch size: 37, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:56:36,733 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95400.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:57:04,531 INFO [finetune.py:976] (6/7) Epoch 17, batch 3800, loss[loss=0.1626, simple_loss=0.2378, pruned_loss=0.04371, over 4786.00 frames. ], tot_loss[loss=0.1848, simple_loss=0.2546, pruned_loss=0.05744, over 952729.25 frames. ], batch size: 51, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:57:09,548 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.488e+02 1.737e+02 2.235e+02 4.648e+02, threshold=3.475e+02, percent-clipped=3.0
2023-03-26 20:57:16,353 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0740, 1.6929, 1.8827, 0.8599, 2.1531, 2.4249, 1.8918, 1.7726], device='cuda:6'), covar=tensor([0.1061, 0.1198, 0.0604, 0.0852, 0.0706, 0.0754, 0.0715, 0.0886], device='cuda:6'), in_proj_covar=tensor([0.0126, 0.0152, 0.0124, 0.0128, 0.0132, 0.0130, 0.0144, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.2530e-05, 1.1043e-04, 8.8856e-05, 9.0985e-05, 9.2837e-05, 9.3164e-05, 1.0394e-04, 1.0727e-04], device='cuda:6')
2023-03-26 20:57:16,898 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95461.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 20:57:37,564 INFO [finetune.py:976] (6/7) Epoch 17, batch 3850, loss[loss=0.1468, simple_loss=0.2158, pruned_loss=0.03894, over 4762.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2531, pruned_loss=0.0567, over 953015.12 frames. ], batch size: 26, lr: 3.38e-03, grad_scale: 64.0
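The loss[...] and tot_loss[...] fields in the finetune.py:976 entries decompose the training objective into a simple (linear-joiner) loss and a pruned transducer loss. Across the entries above, the printed loss equals 0.5 * simple_loss + pruned_loss to within display rounding; the 0.5 weighting here is inferred from the numbers, not stated in the log. A quick check with values copied from three of the tot_loss[...] summaries above:

```python
# Recombine tot_loss[...] fields from the entries above; the apparent rule is
# loss = 0.5 * simple_loss + pruned_loss (weighting inferred, not logged).
entries = [
    (0.1869, 0.2567, 0.05854),  # Epoch 17, batch 3400
    (0.1841, 0.2533, 0.05741),  # Epoch 17, batch 3700
    (0.1832, 0.2531, 0.0567),   # Epoch 17, batch 3850
]
for loss, simple, pruned in entries:
    recombined = 0.5 * simple + pruned
    assert abs(recombined - loss) < 5e-4, (loss, recombined)
    print(f"{loss:.4f} ~= 0.5*{simple:.4f} + {pruned:.5f} = {recombined:.4f}")
```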
2023-03-26 20:57:42,732 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5873, 1.8979, 1.4988, 1.6395, 2.1755, 1.9497, 1.7881, 1.8416], device='cuda:6'), covar=tensor([0.0447, 0.0338, 0.0505, 0.0355, 0.0229, 0.0748, 0.0357, 0.0394], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0108, 0.0143, 0.0113, 0.0099, 0.0109, 0.0099, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.4014e-05, 8.3327e-05, 1.1320e-04, 8.7090e-05, 7.7144e-05, 8.0629e-05, 7.3834e-05, 8.4177e-05], device='cuda:6')
2023-03-26 20:58:10,766 INFO [finetune.py:976] (6/7) Epoch 17, batch 3900, loss[loss=0.1666, simple_loss=0.2204, pruned_loss=0.05643, over 4822.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2503, pruned_loss=0.0562, over 955134.62 frames. ], batch size: 30, lr: 3.38e-03, grad_scale: 64.0
2023-03-26 20:58:15,379 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.225e+02 1.550e+02 1.834e+02 2.229e+02 4.290e+02, threshold=3.669e+02, percent-clipped=3.0
2023-03-26 20:58:46,332 INFO [finetune.py:976] (6/7) Epoch 17, batch 3950, loss[loss=0.1747, simple_loss=0.2406, pruned_loss=0.05438, over 4835.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2474, pruned_loss=0.05525, over 955501.53 frames. ], batch size: 49, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 20:58:51,808 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1669, 1.9357, 1.7098, 1.8300, 1.8097, 1.8564, 1.8747, 2.6723], device='cuda:6'), covar=tensor([0.3668, 0.4127, 0.3147, 0.3809, 0.3742, 0.2362, 0.3906, 0.1652], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0227, 0.0277, 0.0251, 0.0219, 0.0251, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:59:04,366 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95616.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:59:05,028 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95617.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:59:07,534 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5922, 1.4389, 1.3226, 1.6171, 1.7500, 1.6365, 1.1336, 1.3658], device='cuda:6'), covar=tensor([0.2048, 0.2007, 0.1771, 0.1564, 0.1478, 0.1196, 0.2349, 0.1792], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0208, 0.0211, 0.0190, 0.0240, 0.0186, 0.0214, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 20:59:13,753 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95624.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 20:59:34,009 INFO [finetune.py:976] (6/7) Epoch 17, batch 4000, loss[loss=0.1574, simple_loss=0.223, pruned_loss=0.04591, over 3977.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2466, pruned_loss=0.05541, over 956192.19 frames. ], batch size: 17, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 20:59:42,362 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.540e+02 1.979e+02 2.285e+02 3.877e+02, threshold=3.958e+02, percent-clipped=2.0
2023-03-26 21:00:12,631 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95678.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:00:26,084 INFO [finetune.py:976] (6/7) Epoch 17, batch 4050, loss[loss=0.1841, simple_loss=0.2611, pruned_loss=0.05359, over 4770.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2494, pruned_loss=0.05601, over 953858.35 frames. ], batch size: 59, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:00:27,351 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95695.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:00:45,728 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-26 21:00:59,872 INFO [finetune.py:976] (6/7) Epoch 17, batch 4100, loss[loss=0.1512, simple_loss=0.2129, pruned_loss=0.04473, over 4738.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2514, pruned_loss=0.05643, over 953940.02 frames. ], batch size: 23, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:01:04,067 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.600e+02 1.864e+02 2.304e+02 4.240e+02, threshold=3.729e+02, percent-clipped=2.0
2023-03-26 21:01:12,318 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95761.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 21:01:33,031 INFO [finetune.py:976] (6/7) Epoch 17, batch 4150, loss[loss=0.1457, simple_loss=0.2254, pruned_loss=0.03294, over 4865.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.254, pruned_loss=0.05769, over 954494.09 frames. ], batch size: 31, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:01:44,400 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95809.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 21:01:44,412 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9526, 4.6024, 4.3369, 2.3290, 4.7482, 3.5786, 0.8529, 3.2040], device='cuda:6'), covar=tensor([0.2093, 0.1352, 0.1378, 0.2985, 0.0582, 0.0748, 0.4314, 0.1236], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0176, 0.0160, 0.0129, 0.0159, 0.0124, 0.0147, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 21:01:46,232 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95812.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 21:02:06,757 INFO [finetune.py:976] (6/7) Epoch 17, batch 4200, loss[loss=0.1913, simple_loss=0.2593, pruned_loss=0.06168, over 4811.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2547, pruned_loss=0.05738, over 953229.84 frames. ], batch size: 40, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:02:09,293 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95847.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:02:09,903 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8975, 1.4387, 0.8272, 1.6849, 2.2079, 1.5464, 1.7015, 1.6613], device='cuda:6'), covar=tensor([0.1568, 0.2082, 0.1944, 0.1350, 0.1790, 0.1891, 0.1447, 0.2118], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0118, 0.0094, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 21:02:11,507 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.538e+02 1.932e+02 2.354e+02 8.206e+02, threshold=3.863e+02, percent-clipped=2.0
2023-03-26 21:02:27,894 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95873.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 21:02:29,086 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9989, 1.8320, 2.2636, 1.3590, 2.0821, 2.2406, 1.6752, 2.4088], device='cuda:6'), covar=tensor([0.1376, 0.2024, 0.1624, 0.2301, 0.1029, 0.1628, 0.2680, 0.0870], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0203, 0.0189, 0.0189, 0.0176, 0.0211, 0.0215, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:02:39,921 INFO [finetune.py:976] (6/7) Epoch 17, batch 4250, loss[loss=0.1966, simple_loss=0.2585, pruned_loss=0.06734, over 4792.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2525, pruned_loss=0.05632, over 954257.21 frames. ], batch size: 45, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:02:43,533 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8469, 3.3810, 3.5039, 3.6990, 3.6309, 3.3697, 3.9217, 1.2378], device='cuda:6'), covar=tensor([0.0886, 0.0964, 0.1002, 0.1040, 0.1300, 0.1725, 0.0875, 0.5352], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0245, 0.0276, 0.0292, 0.0336, 0.0282, 0.0301, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:02:50,097 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=95908.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 21:02:55,856 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95916.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:03:02,187 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95924.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:03:12,436 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0483, 1.7772, 2.3600, 1.5012, 2.1370, 2.3085, 1.6757, 2.5364], device='cuda:6'), covar=tensor([0.1284, 0.2006, 0.1407, 0.2198, 0.1003, 0.1510, 0.2866, 0.0782], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0203, 0.0189, 0.0189, 0.0176, 0.0211, 0.0215, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:03:13,542 INFO [finetune.py:976] (6/7) Epoch 17, batch 4300, loss[loss=0.1785, simple_loss=0.2553, pruned_loss=0.05091, over 4871.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2499, pruned_loss=0.05599, over 954894.21 frames. ], batch size: 34, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:03:18,260 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.465e+02 1.654e+02 2.123e+02 3.225e+02, threshold=3.308e+02, percent-clipped=0.0
2023-03-26 21:03:27,770 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95964.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:03:32,763 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 21:03:33,598 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=95972.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:03:34,210 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=95973.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:03:39,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=95980.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:03:47,265 INFO [finetune.py:976] (6/7) Epoch 17, batch 4350, loss[loss=0.1978, simple_loss=0.252, pruned_loss=0.07184, over 4937.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2466, pruned_loss=0.0547, over 955650.72 frames. ], batch size: 38, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:03:48,534 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=95995.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:03:54,923 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0
2023-03-26 21:04:20,845 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96041.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:04:21,942 INFO [finetune.py:976] (6/7) Epoch 17, batch 4400, loss[loss=0.177, simple_loss=0.2608, pruned_loss=0.04666, over 4900.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2469, pruned_loss=0.05477, over 953824.17 frames. ], batch size: 36, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:04:22,006 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96043.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:04:28,702 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.398e+01 1.490e+02 1.749e+02 2.200e+02 3.209e+02, threshold=3.497e+02, percent-clipped=0.0
2023-03-26 21:04:59,730 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96080.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:05:13,436 INFO [finetune.py:976] (6/7) Epoch 17, batch 4450, loss[loss=0.1868, simple_loss=0.2646, pruned_loss=0.0545, over 4862.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2501, pruned_loss=0.05582, over 954035.81 frames. ], batch size: 49, lr: 3.38e-03, grad_scale: 32.0
2023-03-26 21:05:58,467 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96141.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:05:59,554 INFO [finetune.py:976] (6/7) Epoch 17, batch 4500, loss[loss=0.1596, simple_loss=0.2341, pruned_loss=0.04255, over 4751.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2531, pruned_loss=0.05691, over 954578.13 frames. ], batch size: 27, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:06:03,840 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.724e+02 1.946e+02 2.358e+02 4.504e+02, threshold=3.891e+02, percent-clipped=3.0
2023-03-26 21:06:13,982 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96165.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:06:15,775 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96168.0, num_to_drop=1, layers_to_drop={3}
2023-03-26 21:06:17,944 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 21:06:33,243 INFO [finetune.py:976] (6/7) Epoch 17, batch 4550, loss[loss=0.2081, simple_loss=0.2809, pruned_loss=0.06762, over 4896.00 frames. ], tot_loss[loss=0.1853, simple_loss=0.2546, pruned_loss=0.05794, over 954299.32 frames. ], batch size: 32, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:06:39,495 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96203.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 21:06:54,575 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96226.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:07:07,175 INFO [finetune.py:976] (6/7) Epoch 17, batch 4600, loss[loss=0.1388, simple_loss=0.2125, pruned_loss=0.03258, over 4820.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.254, pruned_loss=0.05733, over 954135.45 frames. ], batch size: 25, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:07:08,093 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-26 21:07:11,422 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.530e+02 1.886e+02 2.340e+02 4.335e+02, threshold=3.772e+02, percent-clipped=2.0
2023-03-26 21:07:26,488 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96273.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:07:40,044 INFO [finetune.py:976] (6/7) Epoch 17, batch 4650, loss[loss=0.1518, simple_loss=0.2292, pruned_loss=0.03717, over 4818.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2512, pruned_loss=0.05653, over 953410.61 frames. ], batch size: 39, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:07:57,961 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0
2023-03-26 21:07:58,451 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96321.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:08:00,257 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96324.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:08:08,017 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96336.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:08:13,200 INFO [finetune.py:976] (6/7) Epoch 17, batch 4700, loss[loss=0.1556, simple_loss=0.2262, pruned_loss=0.04246, over 4908.00 frames. ], tot_loss[loss=0.1807, simple_loss=0.2488, pruned_loss=0.05628, over 952683.62 frames. ], batch size: 36, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:08:18,319 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.531e+02 1.882e+02 2.216e+02 4.319e+02, threshold=3.764e+02, percent-clipped=2.0
2023-03-26 21:08:19,713 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.68 vs. limit=5.0
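The scaling.py:679 "Whitening" entries compare a per-group anisotropy metric of activations against a limit (2.0 for the 8-group cases and 5.0 for the single-group, 384-channel cases above). One plausible metric with these properties, shown purely as an assumption rather than icefall's exact formula, is g * tr(C^2) / tr(C)^2 per channel group, which equals 1.0 for perfectly whitened features and grows as the covariance spectrum spreads:

```python
# A plausible whitening metric in the spirit of the scaling.py entries:
# per group, metric = g * trace(C @ C) / trace(C)**2, where C is the channel
# covariance and g the group size; it is 1.0 iff C is a multiple of the
# identity. This reconstruction is an assumption, not icefall's exact code.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    n, c = x.shape                                   # (frames, channels)
    g = c // num_groups
    x = x.reshape(n, num_groups, g).transpose(0, 1)  # (groups, frames, g)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x.transpose(1, 2) @ x / n                  # (groups, g, g)
    tr_c2 = (cov * cov).sum(dim=(1, 2))              # trace(C @ C), C symmetric
    tr_c = cov.diagonal(dim1=1, dim2=2).sum(dim=1)
    return (g * tr_c2 / tr_c.clamp(min=1e-20) ** 2).mean()

x = torch.randn(1000, 384)                           # nearly white activations
print(float(whitening_metric(x, num_groups=1)))      # close to 1.0
```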
2023-03-26 21:08:40,448 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96385.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:08:43,993 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96390.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:08:45,654 INFO [finetune.py:976] (6/7) Epoch 17, batch 4750, loss[loss=0.1949, simple_loss=0.2593, pruned_loss=0.06527, over 4896.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.247, pruned_loss=0.05567, over 953885.09 frames. ], batch size: 35, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:08:54,730 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.93 vs. limit=2.0
2023-03-26 21:09:09,994 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1134, 1.3767, 1.3354, 1.2980, 1.4184, 2.4701, 1.2229, 1.4355], device='cuda:6'), covar=tensor([0.0991, 0.1803, 0.1130, 0.0978, 0.1660, 0.0382, 0.1557, 0.1699], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0078, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 21:09:13,610 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0802, 2.1239, 1.7488, 2.1534, 2.0019, 2.0157, 1.9851, 2.9144], device='cuda:6'), covar=tensor([0.4114, 0.4783, 0.3557, 0.4709, 0.4838, 0.2669, 0.4589, 0.1753], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0228, 0.0278, 0.0253, 0.0220, 0.0252, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:09:14,761 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96436.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:09:19,422 INFO [finetune.py:976] (6/7) Epoch 17, batch 4800, loss[loss=0.1405, simple_loss=0.2007, pruned_loss=0.04012, over 4376.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2498, pruned_loss=0.0569, over 953219.77 frames. ], batch size: 19, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:09:25,023 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.609e+02 1.875e+02 2.422e+02 6.864e+02, threshold=3.750e+02, percent-clipped=2.0
2023-03-26 21:09:26,287 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96451.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:09:36,665 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96468.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 21:09:54,758 INFO [finetune.py:976] (6/7) Epoch 17, batch 4850, loss[loss=0.1769, simple_loss=0.2679, pruned_loss=0.04294, over 4760.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2523, pruned_loss=0.05658, over 954646.25 frames. ], batch size: 54, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:10:03,423 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96503.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:10:15,704 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96516.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 21:10:18,747 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96521.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:10:39,579 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 21:10:45,276 INFO [finetune.py:976] (6/7) Epoch 17, batch 4900, loss[loss=0.1871, simple_loss=0.2616, pruned_loss=0.05633, over 4858.00 frames. ], tot_loss[loss=0.1847, simple_loss=0.2548, pruned_loss=0.05731, over 957283.03 frames. ], batch size: 31, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:10:54,622 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.592e+02 1.896e+02 2.164e+02 3.347e+02, threshold=3.792e+02, percent-clipped=0.0
2023-03-26 21:10:55,803 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96551.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:11:26,123 INFO [finetune.py:976] (6/7) Epoch 17, batch 4950, loss[loss=0.1907, simple_loss=0.2534, pruned_loss=0.06399, over 4865.00 frames. ], tot_loss[loss=0.1852, simple_loss=0.2554, pruned_loss=0.05747, over 957609.85 frames. ], batch size: 34, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:11:55,617 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96636.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:11:59,762 INFO [finetune.py:976] (6/7) Epoch 17, batch 5000, loss[loss=0.1367, simple_loss=0.2101, pruned_loss=0.03167, over 4895.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2538, pruned_loss=0.05669, over 956181.86 frames. ], batch size: 35, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:12:04,407 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.531e+02 1.819e+02 2.156e+02 3.437e+02, threshold=3.638e+02, percent-clipped=0.0
2023-03-26 21:12:24,601 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96680.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:12:26,963 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96684.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:12:33,329 INFO [finetune.py:976] (6/7) Epoch 17, batch 5050, loss[loss=0.1461, simple_loss=0.2184, pruned_loss=0.03691, over 4703.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2515, pruned_loss=0.05621, over 957780.67 frames. ], batch size: 23, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:12:56,441 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7818, 1.6670, 1.5085, 1.8888, 2.0907, 1.9189, 1.4746, 1.4571], device='cuda:6'), covar=tensor([0.2238, 0.2166, 0.2026, 0.1699, 0.1919, 0.1247, 0.2602, 0.1989], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0191, 0.0241, 0.0186, 0.0215, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:12:57,639 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0617, 1.0289, 1.0410, 0.4476, 0.9860, 1.1837, 1.2352, 1.0123], device='cuda:6'), covar=tensor([0.0794, 0.0627, 0.0555, 0.0539, 0.0556, 0.0620, 0.0413, 0.0683], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0122, 0.0126, 0.0129, 0.0128, 0.0142, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.1012e-05, 1.0845e-04, 8.7596e-05, 8.9985e-05, 9.0559e-05, 9.2133e-05, 1.0220e-04, 1.0552e-04], device='cuda:6')
2023-03-26 21:13:01,812 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96736.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:13:06,444 INFO [finetune.py:976] (6/7) Epoch 17, batch 5100, loss[loss=0.1566, simple_loss=0.2326, pruned_loss=0.04031, over 4855.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2479, pruned_loss=0.05534, over 957359.40 frames. ], batch size: 44, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:13:08,316 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=96746.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:13:10,589 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.513e+02 1.848e+02 2.246e+02 3.685e+02, threshold=3.695e+02, percent-clipped=1.0
2023-03-26 21:13:28,926 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=96776.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:13:31,877 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6440, 1.2437, 0.8794, 1.5387, 2.1471, 1.0773, 1.3998, 1.5998], device='cuda:6'), covar=tensor([0.1565, 0.2090, 0.1866, 0.1245, 0.1817, 0.1985, 0.1512, 0.1954], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0091, 0.0117, 0.0093, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 21:13:33,652 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96784.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:13:33,787 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.59 vs. limit=5.0
2023-03-26 21:13:39,092 INFO [finetune.py:976] (6/7) Epoch 17, batch 5150, loss[loss=0.2428, simple_loss=0.3081, pruned_loss=0.0888, over 4829.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2481, pruned_loss=0.05571, over 955270.14 frames. ], batch size: 39, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:13:58,708 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96821.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:14:08,380 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=96837.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:14:11,885 INFO [finetune.py:976] (6/7) Epoch 17, batch 5200, loss[loss=0.2266, simple_loss=0.2978, pruned_loss=0.0777, over 4933.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2522, pruned_loss=0.05686, over 956369.56 frames. ], batch size: 33, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:14:16,598 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.064e+02 1.635e+02 1.977e+02 2.300e+02 5.939e+02, threshold=3.955e+02, percent-clipped=5.0
2023-03-26 21:14:20,453 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 21:14:29,997 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=96869.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:14:31,830 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9746, 3.4653, 3.6657, 3.8286, 3.7466, 3.5001, 4.0424, 1.2942], device='cuda:6'), covar=tensor([0.0801, 0.0935, 0.0849, 0.0963, 0.1196, 0.1558, 0.0761, 0.5491], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0245, 0.0277, 0.0291, 0.0335, 0.0281, 0.0300, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:14:40,809 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5757, 1.4966, 2.1408, 3.3563, 2.1557, 2.2872, 1.1612, 2.7397], device='cuda:6'), covar=tensor([0.1715, 0.1467, 0.1274, 0.0558, 0.0842, 0.1624, 0.1679, 0.0521], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0135, 0.0166, 0.0102, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 21:14:44,942 INFO [finetune.py:976] (6/7) Epoch 17, batch 5250, loss[loss=0.1772, simple_loss=0.2565, pruned_loss=0.04892, over 4885.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2545, pruned_loss=0.0571, over 957379.39 frames. ], batch size: 35, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:15:01,424 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1283, 1.5208, 0.7703, 2.0952, 2.5736, 1.7776, 1.8879, 1.9461], device='cuda:6'), covar=tensor([0.1363, 0.1964, 0.1983, 0.1060, 0.1580, 0.1845, 0.1299, 0.1926], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0095, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 21:15:21,072 INFO [finetune.py:976] (6/7) Epoch 17, batch 5300, loss[loss=0.1481, simple_loss=0.2354, pruned_loss=0.03035, over 4730.00 frames. ], tot_loss[loss=0.1844, simple_loss=0.2551, pruned_loss=0.05685, over 958936.45 frames. ], batch size: 27, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:15:30,009 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.644e+02 1.959e+02 2.379e+02 3.599e+02, threshold=3.918e+02, percent-clipped=0.0
2023-03-26 21:15:48,615 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-26 21:16:05,184 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=96980.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:16:17,642 INFO [finetune.py:976] (6/7) Epoch 17, batch 5350, loss[loss=0.1807, simple_loss=0.2564, pruned_loss=0.05255, over 4733.00 frames. ], tot_loss[loss=0.1845, simple_loss=0.2553, pruned_loss=0.05687, over 956908.84 frames. ], batch size: 54, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:16:25,302 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97000.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:16:44,614 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97028.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:16:54,581 INFO [finetune.py:976] (6/7) Epoch 17, batch 5400, loss[loss=0.1918, simple_loss=0.2573, pruned_loss=0.06311, over 4823.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2521, pruned_loss=0.05589, over 957155.55 frames. ], batch size: 39, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:16:56,510 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97046.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:16:58,799 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.591e+02 1.860e+02 2.348e+02 4.043e+02, threshold=3.721e+02, percent-clipped=1.0
2023-03-26 21:17:06,265 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97061.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:17:27,397 INFO [finetune.py:976] (6/7) Epoch 17, batch 5450, loss[loss=0.1659, simple_loss=0.2321, pruned_loss=0.04988, over 4695.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2499, pruned_loss=0.05564, over 956959.03 frames. ], batch size: 23, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:17:28,071 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97094.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:17:37,259 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8979, 1.1825, 1.9492, 1.8586, 1.6815, 1.6377, 1.7900, 1.7888], device='cuda:6'), covar=tensor([0.3684, 0.3959, 0.3341, 0.3439, 0.4696, 0.3473, 0.4059, 0.3113], device='cuda:6'), in_proj_covar=tensor([0.0250, 0.0241, 0.0260, 0.0275, 0.0274, 0.0248, 0.0284, 0.0242], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:17:52,428 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97132.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:18:00,547 INFO [finetune.py:976] (6/7) Epoch 17, batch 5500, loss[loss=0.1852, simple_loss=0.2534, pruned_loss=0.05851, over 4851.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2465, pruned_loss=0.05466, over 955673.05 frames. ], batch size: 47, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:18:04,744 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.221e+01 1.533e+02 1.769e+02 2.042e+02 3.202e+02, threshold=3.539e+02, percent-clipped=0.0
2023-03-26 21:18:33,721 INFO [finetune.py:976] (6/7) Epoch 17, batch 5550, loss[loss=0.1912, simple_loss=0.2618, pruned_loss=0.06037, over 4828.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2471, pruned_loss=0.05471, over 955694.24 frames. ], batch size: 39, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:18:34,943 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3040, 2.2997, 1.9877, 2.4208, 2.9912, 2.3290, 2.4495, 1.7496], device='cuda:6'), covar=tensor([0.2000, 0.1903, 0.1830, 0.1541, 0.1657, 0.1068, 0.1843, 0.1970], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0191, 0.0241, 0.0186, 0.0215, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:19:05,111 INFO [finetune.py:976] (6/7) Epoch 17, batch 5600, loss[loss=0.2124, simple_loss=0.2924, pruned_loss=0.06622, over 4803.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2508, pruned_loss=0.05559, over 955450.23 frames. ], batch size: 51, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:19:09,080 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.641e+02 1.914e+02 2.406e+02 4.422e+02, threshold=3.827e+02, percent-clipped=1.0
2023-03-26 21:19:14,864 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97260.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:19:18,898 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5467, 1.6754, 2.2903, 1.8017, 1.9056, 4.2513, 1.7117, 1.8246], device='cuda:6'), covar=tensor([0.0994, 0.1783, 0.1034, 0.1035, 0.1498, 0.0197, 0.1437, 0.1749], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0078, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 21:19:34,291 INFO [finetune.py:976] (6/7) Epoch 17, batch 5650, loss[loss=0.1673, simple_loss=0.2555, pruned_loss=0.03951, over 4760.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2526, pruned_loss=0.05549, over 954318.85 frames. ], batch size: 28, lr: 3.37e-03, grad_scale: 32.0
2023-03-26 21:19:51,442 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97321.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:20:00,547 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.7374, 4.1071, 4.3379, 4.5203, 4.4589, 4.2295, 4.8149, 2.0735], device='cuda:6'), covar=tensor([0.0716, 0.0870, 0.0848, 0.0956, 0.1228, 0.1397, 0.0598, 0.4753], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0245, 0.0277, 0.0291, 0.0336, 0.0281, 0.0299, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 21:20:04,106 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97342.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:20:04,603 INFO [finetune.py:976] (6/7) Epoch 17, batch 5700, loss[loss=0.1448, simple_loss=0.2087, pruned_loss=0.04042, over 4113.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2482, pruned_loss=0.05439, over 933005.76 frames. ], batch size: 18, lr: 3.36e-03, grad_scale: 32.0
2023-03-26 21:20:07,046 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97347.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:20:08,737 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.528e+01 1.533e+02 1.739e+02 2.212e+02 3.283e+02, threshold=3.478e+02, percent-clipped=0.0
2023-03-26 21:20:12,309 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97356.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 21:20:35,934 INFO [finetune.py:976] (6/7) Epoch 18, batch 0, loss[loss=0.2294, simple_loss=0.295, pruned_loss=0.08187, over 4818.00 frames. ], tot_loss[loss=0.2294, simple_loss=0.295, pruned_loss=0.08187, over 4818.00 frames. ], batch size: 47, lr: 3.36e-03, grad_scale: 32.0
2023-03-26 21:20:35,934 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 21:20:46,783 INFO [finetune.py:1010] (6/7) Epoch 18, validation: loss=0.1584, simple_loss=0.2281, pruned_loss=0.0444, over 2265189.00 frames.
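The zipformer.py:1188 lines throughout this stretch record per-batch stochastic layer dropping: most batches drop nothing (num_to_drop=0), and occasionally a single layer is skipped (num_to_drop=1, layers_to_drop={k}). A hedged sketch of such a mechanism follows; the probability values, and how the schedule relates to the warmup_begin/warmup_end numbers printed with each line, are assumptions rather than icefall's actual logic:

```python
# Hedged sketch of stochastic layer dropping as suggested by the
# zipformer.py:1188 entries; probabilities and schedule are assumptions.
import random

def choose_layers_to_drop(num_layers: int, batch_count: float,
                          warmup_end: float, post_warmup_prob: float = 0.05) -> set:
    # Drop layers more aggressively early in warmup, then keep a small
    # residual probability so the net stays robust to missing layers.
    prob = 0.5 if batch_count < warmup_end else post_warmup_prob
    return {i for i in range(num_layers) if random.random() < prob}

# At batch_count ~9.7e4 (far past warmup_end=4000.0) this usually returns an
# empty set and occasionally a single layer, matching the logged pattern.
print(choose_layers_to_drop(num_layers=4, batch_count=97403.0, warmup_end=4000.0))
```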
2023-03-26 21:20:46,783 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 21:20:49,166 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97374.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:21:26,110 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97403.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:21:34,092 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97408.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:21:47,680 INFO [finetune.py:976] (6/7) Epoch 18, batch 50, loss[loss=0.1978, simple_loss=0.2631, pruned_loss=0.06623, over 4845.00 frames. ], tot_loss[loss=0.1886, simple_loss=0.2569, pruned_loss=0.06015, over 216060.50 frames. ], batch size: 44, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:21:58,964 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97432.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:22:00,772 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97435.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:22:09,807 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.163e+01 1.563e+02 1.902e+02 2.308e+02 3.615e+02, threshold=3.804e+02, percent-clipped=1.0 2023-03-26 21:22:25,131 INFO [finetune.py:976] (6/7) Epoch 18, batch 100, loss[loss=0.1449, simple_loss=0.2177, pruned_loss=0.03607, over 4816.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2477, pruned_loss=0.05481, over 381601.50 frames. ], batch size: 39, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:22:31,666 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97480.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:22:33,614 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.01 vs. limit=5.0 2023-03-26 21:22:35,379 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.31 vs. limit=5.0 2023-03-26 21:22:40,369 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 21:22:58,722 INFO [finetune.py:976] (6/7) Epoch 18, batch 150, loss[loss=0.1516, simple_loss=0.2243, pruned_loss=0.03945, over 4910.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2436, pruned_loss=0.05458, over 510047.03 frames. ], batch size: 37, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:23:13,605 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6482, 3.7373, 3.5809, 1.9271, 3.8599, 2.8206, 0.9197, 2.7839], device='cuda:6'), covar=tensor([0.2695, 0.1894, 0.1487, 0.3189, 0.0980, 0.1019, 0.4355, 0.1458], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0175, 0.0158, 0.0128, 0.0158, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 21:23:17,210 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.559e+02 1.792e+02 2.227e+02 6.409e+02, threshold=3.584e+02, percent-clipped=2.0 2023-03-26 21:23:20,356 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97555.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:23:32,348 INFO [finetune.py:976] (6/7) Epoch 18, batch 200, loss[loss=0.1813, simple_loss=0.2502, pruned_loss=0.05619, over 4865.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2457, pruned_loss=0.05611, over 609674.93 frames. 
], batch size: 49, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:24:01,855 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:01,916 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97616.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:05,308 INFO [finetune.py:976] (6/7) Epoch 18, batch 250, loss[loss=0.1718, simple_loss=0.2529, pruned_loss=0.04539, over 4899.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2498, pruned_loss=0.05671, over 686866.57 frames. ], batch size: 43, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:24:24,122 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.114e+02 1.615e+02 1.960e+02 2.417e+02 4.168e+02, threshold=3.921e+02, percent-clipped=3.0 2023-03-26 21:24:27,897 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97656.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:37,890 INFO [finetune.py:976] (6/7) Epoch 18, batch 300, loss[loss=0.199, simple_loss=0.2664, pruned_loss=0.0658, over 4772.00 frames. ], tot_loss[loss=0.1837, simple_loss=0.2529, pruned_loss=0.05727, over 748514.60 frames. ], batch size: 27, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:24:40,300 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97674.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:24:45,445 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4734, 1.0778, 0.8099, 1.3638, 1.9270, 0.7483, 1.1820, 1.3261], device='cuda:6'), covar=tensor([0.1545, 0.2185, 0.1769, 0.1228, 0.1940, 0.2100, 0.1584, 0.2060], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0095, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 21:24:56,233 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97698.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 21:24:58,733 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4197, 2.2490, 1.9012, 2.2930, 2.2937, 2.0465, 2.6626, 2.3783], device='cuda:6'), covar=tensor([0.1330, 0.2136, 0.2903, 0.2586, 0.2492, 0.1616, 0.2514, 0.1869], device='cuda:6'), in_proj_covar=tensor([0.0183, 0.0188, 0.0235, 0.0253, 0.0245, 0.0202, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:24:59,262 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97703.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:24:59,861 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97704.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:25:00,548 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8139, 1.7064, 1.4757, 1.3391, 1.8452, 1.5544, 1.8357, 1.8092], device='cuda:6'), covar=tensor([0.1333, 0.1938, 0.2886, 0.2558, 0.2445, 0.1720, 0.2723, 0.1817], device='cuda:6'), in_proj_covar=tensor([0.0183, 0.0187, 0.0234, 0.0253, 0.0245, 0.0202, 0.0214, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:25:10,594 INFO [finetune.py:976] (6/7) Epoch 18, batch 350, loss[loss=0.1909, simple_loss=0.2746, pruned_loss=0.05361, over 4791.00 frames. ], tot_loss[loss=0.1874, simple_loss=0.2565, pruned_loss=0.05918, over 792120.77 frames. 
], batch size: 59, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:25:11,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8337, 1.4183, 1.8191, 1.8605, 1.6096, 1.5945, 1.8032, 1.7051], device='cuda:6'), covar=tensor([0.4130, 0.3991, 0.3426, 0.3804, 0.4978, 0.3894, 0.4653, 0.3231], device='cuda:6'), in_proj_covar=tensor([0.0248, 0.0239, 0.0259, 0.0273, 0.0272, 0.0247, 0.0281, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:25:17,533 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97730.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:25:21,092 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97735.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 21:25:30,595 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.611e+02 1.882e+02 2.387e+02 3.928e+02, threshold=3.763e+02, percent-clipped=1.0 2023-03-26 21:25:43,300 INFO [finetune.py:976] (6/7) Epoch 18, batch 400, loss[loss=0.1656, simple_loss=0.2412, pruned_loss=0.04498, over 4859.00 frames. ], tot_loss[loss=0.1867, simple_loss=0.2569, pruned_loss=0.05828, over 828910.34 frames. ], batch size: 31, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:26:07,007 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7266, 3.8704, 3.6744, 1.8536, 3.9651, 2.9074, 0.7702, 2.8047], device='cuda:6'), covar=tensor([0.2275, 0.1710, 0.1400, 0.3260, 0.0894, 0.0951, 0.4503, 0.1297], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0178, 0.0160, 0.0129, 0.0161, 0.0124, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 21:26:22,831 INFO [finetune.py:976] (6/7) Epoch 18, batch 450, loss[loss=0.1578, simple_loss=0.2335, pruned_loss=0.04103, over 4771.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2545, pruned_loss=0.0571, over 858383.41 frames. ], batch size: 28, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:26:23,540 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1175, 1.4336, 0.9113, 2.0707, 2.4600, 1.8864, 1.7614, 2.0234], device='cuda:6'), covar=tensor([0.1351, 0.2017, 0.1927, 0.1051, 0.1633, 0.1777, 0.1334, 0.1851], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 21:27:01,042 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.519e+02 1.772e+02 2.128e+02 3.513e+02, threshold=3.544e+02, percent-clipped=0.0 2023-03-26 21:27:16,934 INFO [finetune.py:976] (6/7) Epoch 18, batch 500, loss[loss=0.181, simple_loss=0.2473, pruned_loss=0.05739, over 4825.00 frames. ], tot_loss[loss=0.1832, simple_loss=0.2524, pruned_loss=0.05698, over 878865.14 frames. 
], batch size: 41, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:27:25,103 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3824, 1.2925, 2.0802, 3.0911, 1.9775, 2.3613, 1.0729, 2.6511], device='cuda:6'), covar=tensor([0.2081, 0.2159, 0.1542, 0.1002, 0.1044, 0.1870, 0.2157, 0.0701], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0134, 0.0164, 0.0100, 0.0136, 0.0123, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 21:27:29,748 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=97888.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:27:44,610 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=97911.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:27:47,687 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97916.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:27:50,634 INFO [finetune.py:976] (6/7) Epoch 18, batch 550, loss[loss=0.1668, simple_loss=0.2421, pruned_loss=0.04576, over 4818.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2493, pruned_loss=0.05652, over 895307.67 frames. ], batch size: 33, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:28:19,281 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=97949.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:28:19,733 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.294e+01 1.530e+02 1.816e+02 2.060e+02 3.951e+02, threshold=3.633e+02, percent-clipped=3.0 2023-03-26 21:28:21,126 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8047, 1.6797, 1.4898, 1.8964, 2.4105, 1.8351, 1.6438, 1.4921], device='cuda:6'), covar=tensor([0.2410, 0.2179, 0.2222, 0.1740, 0.1787, 0.1425, 0.2527, 0.2031], device='cuda:6'), in_proj_covar=tensor([0.0243, 0.0209, 0.0213, 0.0192, 0.0242, 0.0187, 0.0216, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:28:28,265 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=97964.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:28:32,532 INFO [finetune.py:976] (6/7) Epoch 18, batch 600, loss[loss=0.2126, simple_loss=0.277, pruned_loss=0.07411, over 4869.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2492, pruned_loss=0.05623, over 910356.02 frames. ], batch size: 34, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:28:47,279 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1408, 1.2949, 1.3443, 0.6972, 1.2910, 1.4967, 1.5647, 1.2181], device='cuda:6'), covar=tensor([0.0954, 0.0640, 0.0485, 0.0537, 0.0490, 0.0639, 0.0359, 0.0782], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0123, 0.0126, 0.0129, 0.0128, 0.0142, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.0904e-05, 1.0867e-04, 8.7769e-05, 8.9685e-05, 9.1087e-05, 9.2212e-05, 1.0221e-04, 1.0569e-04], device='cuda:6') 2023-03-26 21:28:51,971 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=97998.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:28:56,784 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98003.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:28:57,545 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.68 vs. 
limit=5.0 2023-03-26 21:29:07,659 INFO [finetune.py:976] (6/7) Epoch 18, batch 650, loss[loss=0.1912, simple_loss=0.2598, pruned_loss=0.0613, over 4821.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2503, pruned_loss=0.0559, over 921209.67 frames. ], batch size: 40, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:29:13,279 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98030.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:29:13,312 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98030.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:29:25,509 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98046.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:29:28,301 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.543e+02 1.878e+02 2.128e+02 3.672e+02, threshold=3.757e+02, percent-clipped=1.0 2023-03-26 21:29:28,991 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98051.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:29:41,537 INFO [finetune.py:976] (6/7) Epoch 18, batch 700, loss[loss=0.2027, simple_loss=0.2682, pruned_loss=0.06863, over 4817.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2522, pruned_loss=0.05639, over 929881.05 frames. ], batch size: 33, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:29:45,870 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98078.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:30:03,971 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0140, 1.7359, 2.0233, 2.0215, 1.7870, 1.7829, 1.9737, 1.9006], device='cuda:6'), covar=tensor([0.4418, 0.4066, 0.3407, 0.4228, 0.5067, 0.4077, 0.4805, 0.3173], device='cuda:6'), in_proj_covar=tensor([0.0249, 0.0240, 0.0259, 0.0275, 0.0273, 0.0247, 0.0283, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:30:05,712 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98106.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:30:06,093 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-26 21:30:15,235 INFO [finetune.py:976] (6/7) Epoch 18, batch 750, loss[loss=0.2265, simple_loss=0.3074, pruned_loss=0.07284, over 4885.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2534, pruned_loss=0.05658, over 934262.16 frames. ], batch size: 43, lr: 3.36e-03, grad_scale: 64.0 2023-03-26 21:30:19,034 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 21:30:31,043 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.77 vs. limit=5.0 2023-03-26 21:30:34,725 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.501e+02 1.785e+02 2.309e+02 4.193e+02, threshold=3.569e+02, percent-clipped=2.0 2023-03-26 21:30:46,058 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98167.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:30:48,361 INFO [finetune.py:976] (6/7) Epoch 18, batch 800, loss[loss=0.1752, simple_loss=0.2377, pruned_loss=0.05635, over 4883.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2522, pruned_loss=0.05558, over 940393.72 frames. 
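The scaling.py:679 Whitening lines compare a whiteness metric of a module's activations against a fixed limit, and a corrective penalty applies only when the metric exceeds it. One plausible formulation, assuming the metric is mean(λ²)/mean(λ)² over the eigenvalues λ of the per-group channel covariance, which equals 1.0 for perfectly white features; the exact definition inside scaling.py may differ:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int = 1) -> float:
    """mean(eig^2) / mean(eig)^2 of the per-group channel covariance."""
    n, c = x.shape
    x = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)  # (g, n, k)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x.transpose(1, 2) @ x / n                # (g, k, k) covariance
    eigs = torch.linalg.eigvalsh(cov)              # symmetric => real spectrum
    return ((eigs ** 2).mean() / eigs.mean() ** 2).item()

# For white noise the metric sits near 1 + num_channels/num_frames;
# structured (non-white) activations push it up toward the logged limit.
print(whitening_metric(torch.randn(4000, 384)), "vs. limit=5.0")
```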
], batch size: 32, lr: 3.36e-03, grad_scale: 32.0 2023-03-26 21:31:15,634 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98211.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:31:22,190 INFO [finetune.py:976] (6/7) Epoch 18, batch 850, loss[loss=0.161, simple_loss=0.2259, pruned_loss=0.04803, over 4827.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2502, pruned_loss=0.05508, over 944772.66 frames. ], batch size: 25, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:31:45,835 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98244.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:31:55,530 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.155e+02 1.511e+02 1.794e+02 2.111e+02 3.360e+02, threshold=3.589e+02, percent-clipped=0.0 2023-03-26 21:32:07,091 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98259.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:32:15,533 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8385, 1.7214, 1.4841, 1.8879, 2.1882, 1.8816, 1.5031, 1.5343], device='cuda:6'), covar=tensor([0.1806, 0.1705, 0.1682, 0.1318, 0.1576, 0.1132, 0.2343, 0.1689], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0209, 0.0212, 0.0191, 0.0242, 0.0187, 0.0215, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:32:25,142 INFO [finetune.py:976] (6/7) Epoch 18, batch 900, loss[loss=0.1458, simple_loss=0.2279, pruned_loss=0.03183, over 4881.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2476, pruned_loss=0.05434, over 947279.02 frames. ], batch size: 34, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:33:02,857 INFO [finetune.py:976] (6/7) Epoch 18, batch 950, loss[loss=0.1794, simple_loss=0.2533, pruned_loss=0.05271, over 4911.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2462, pruned_loss=0.05433, over 948340.10 frames. ], batch size: 36, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:33:10,005 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98330.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 21:33:23,082 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.511e+02 1.760e+02 2.160e+02 3.441e+02, threshold=3.521e+02, percent-clipped=0.0 2023-03-26 21:33:37,317 INFO [finetune.py:976] (6/7) Epoch 18, batch 1000, loss[loss=0.2155, simple_loss=0.2933, pruned_loss=0.06886, over 4904.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2485, pruned_loss=0.05493, over 951823.78 frames. 
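The grad_scale field in the loss records (64.0 above, 32.0 from this point, and 16.0 later in this epoch) is the mixed-precision loss scale, which is halved whenever scaled gradients overflow and grown back after a long run of clean steps. A minimal sketch of the standard torch.cuda.amp pattern that produces this behavior; the actual finetune.py loop has more moving parts:

```python
import torch

scaler = torch.cuda.amp.GradScaler(init_scale=64.0)

def train_step(model, optimizer, criterion, inputs, targets):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = criterion(model(inputs), targets)
    scaler.scale(loss).backward()   # backprop on the scaled loss
    scaler.step(optimizer)          # skipped, and scale halved, on inf/nan
    scaler.update()                 # scale grown after a run of clean steps
    return loss.detach(), scaler.get_scale()
```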
], batch size: 35, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:33:42,166 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98378.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 21:34:03,288 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98410.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:34:06,240 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1341, 1.2976, 1.2542, 1.2785, 1.4056, 2.4531, 1.2629, 1.4200], device='cuda:6'), covar=tensor([0.1050, 0.2004, 0.1108, 0.1011, 0.1758, 0.0376, 0.1573, 0.1840], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0078], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 21:34:14,798 INFO [finetune.py:976] (6/7) Epoch 18, batch 1050, loss[loss=0.1815, simple_loss=0.2514, pruned_loss=0.05586, over 4829.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2498, pruned_loss=0.05468, over 950990.19 frames. ], batch size: 30, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:34:26,451 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2989, 2.2113, 1.7518, 2.3023, 2.1934, 1.9027, 2.6259, 2.2986], device='cuda:6'), covar=tensor([0.1373, 0.2129, 0.2984, 0.2439, 0.2588, 0.1759, 0.2495, 0.1814], device='cuda:6'), in_proj_covar=tensor([0.0184, 0.0187, 0.0235, 0.0254, 0.0245, 0.0203, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:34:36,629 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.558e+02 1.856e+02 2.287e+02 5.753e+02, threshold=3.713e+02, percent-clipped=4.0 2023-03-26 21:34:44,674 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98462.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:34:51,513 INFO [finetune.py:976] (6/7) Epoch 18, batch 1100, loss[loss=0.1621, simple_loss=0.2402, pruned_loss=0.04197, over 4865.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2514, pruned_loss=0.05545, over 952100.45 frames. ], batch size: 31, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:34:51,651 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98471.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:35:08,491 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8100, 2.0637, 1.7082, 1.6263, 2.4035, 2.3471, 2.0143, 2.0193], device='cuda:6'), covar=tensor([0.0404, 0.0359, 0.0545, 0.0382, 0.0289, 0.0498, 0.0386, 0.0398], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0108, 0.0144, 0.0113, 0.0101, 0.0109, 0.0099, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.5148e-05, 8.3451e-05, 1.1378e-04, 8.6666e-05, 7.8786e-05, 8.0849e-05, 7.4099e-05, 8.4446e-05], device='cuda:6') 2023-03-26 21:35:14,988 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8062, 1.3027, 1.8478, 1.7968, 1.5993, 1.5284, 1.7608, 1.6651], device='cuda:6'), covar=tensor([0.4082, 0.3919, 0.3164, 0.3437, 0.4830, 0.3655, 0.4106, 0.3029], device='cuda:6'), in_proj_covar=tensor([0.0248, 0.0240, 0.0259, 0.0274, 0.0273, 0.0248, 0.0283, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:35:24,233 INFO [finetune.py:976] (6/7) Epoch 18, batch 1150, loss[loss=0.1595, simple_loss=0.233, pruned_loss=0.043, over 4726.00 frames. 
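The attn_weights_entropy dumps are a diagnostic on the self-attention distributions: one value per head (the eight-element rows above), where low entropy means sharply peaked attention and high entropy means nearly uniform attention. An illustrative formulation, assuming the entropy of each head's attention rows is averaged over query positions; the zipformer.py internals may differ:

```python
import torch

def attention_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    """Per-head entropy of attention rows, averaged over query positions.

    attn: (num_heads, num_queries, num_keys), each row summing to 1.
    """
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (num_heads, num_queries)
    return ent.mean(dim=-1)                         # one value per head

weights = torch.softmax(torch.randn(8, 10, 10), dim=-1)
print(attention_entropy(weights))  # eight entropies, as in the dumps above
```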
], tot_loss[loss=0.1818, simple_loss=0.252, pruned_loss=0.05583, over 951501.38 frames. ], batch size: 23, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:35:39,053 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98543.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:35:39,640 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98544.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:35:43,756 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.197e+02 1.665e+02 1.912e+02 2.323e+02 4.830e+02, threshold=3.825e+02, percent-clipped=3.0 2023-03-26 21:35:57,262 INFO [finetune.py:976] (6/7) Epoch 18, batch 1200, loss[loss=0.1684, simple_loss=0.2436, pruned_loss=0.04663, over 4799.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2512, pruned_loss=0.05572, over 951824.71 frames. ], batch size: 51, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:35:59,141 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7653, 1.0222, 1.6947, 1.6287, 1.4797, 1.4029, 1.5271, 1.6752], device='cuda:6'), covar=tensor([0.3871, 0.3996, 0.3601, 0.3826, 0.5048, 0.4279, 0.4352, 0.3410], device='cuda:6'), in_proj_covar=tensor([0.0249, 0.0241, 0.0259, 0.0275, 0.0274, 0.0248, 0.0284, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:36:12,054 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98592.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:36:19,568 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98604.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:36:31,241 INFO [finetune.py:976] (6/7) Epoch 18, batch 1250, loss[loss=0.17, simple_loss=0.2293, pruned_loss=0.05536, over 4903.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2493, pruned_loss=0.05571, over 955032.91 frames. ], batch size: 32, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:37:01,601 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.074e+02 1.465e+02 1.816e+02 2.333e+02 4.053e+02, threshold=3.632e+02, percent-clipped=1.0 2023-03-26 21:37:29,819 INFO [finetune.py:976] (6/7) Epoch 18, batch 1300, loss[loss=0.1968, simple_loss=0.2619, pruned_loss=0.06582, over 4761.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2466, pruned_loss=0.05453, over 955082.67 frames. 
], batch size: 28, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:37:54,845 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0727, 1.8902, 1.4911, 0.6417, 1.7352, 1.7599, 1.6232, 1.8704], device='cuda:6'), covar=tensor([0.1028, 0.0800, 0.1470, 0.1830, 0.1242, 0.1956, 0.2072, 0.0727], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0196, 0.0201, 0.0183, 0.0214, 0.0208, 0.0223, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:38:07,979 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1072, 1.2182, 1.4172, 0.9957, 1.2024, 1.3417, 1.1965, 1.4739], device='cuda:6'), covar=tensor([0.1490, 0.2268, 0.1410, 0.1605, 0.1130, 0.1458, 0.2838, 0.1153], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0202, 0.0191, 0.0189, 0.0175, 0.0211, 0.0214, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:38:11,397 INFO [finetune.py:976] (6/7) Epoch 18, batch 1350, loss[loss=0.1598, simple_loss=0.2333, pruned_loss=0.04316, over 4892.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2462, pruned_loss=0.05433, over 955414.67 frames. ], batch size: 35, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:38:16,736 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9036, 1.9017, 1.7336, 1.8595, 1.5589, 4.5836, 1.6821, 2.1251], device='cuda:6'), covar=tensor([0.3207, 0.2375, 0.2096, 0.2260, 0.1572, 0.0156, 0.2392, 0.1253], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0121, 0.0122, 0.0113, 0.0096, 0.0096, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 21:38:31,462 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.676e+02 1.899e+02 2.271e+02 3.691e+02, threshold=3.798e+02, percent-clipped=1.0 2023-03-26 21:38:38,225 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=98762.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:38:41,110 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98766.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:38:41,137 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5648, 1.5289, 2.2424, 1.8844, 1.7971, 4.0910, 1.6287, 1.7628], device='cuda:6'), covar=tensor([0.0950, 0.1766, 0.1081, 0.1007, 0.1574, 0.0182, 0.1418, 0.1721], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 21:38:44,610 INFO [finetune.py:976] (6/7) Epoch 18, batch 1400, loss[loss=0.1944, simple_loss=0.278, pruned_loss=0.05535, over 4821.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2497, pruned_loss=0.05547, over 954671.53 frames. ], batch size: 38, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:38:46,495 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. 
limit=2.0 2023-03-26 21:39:04,088 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6356, 2.2986, 2.8057, 1.7361, 2.6872, 2.8971, 2.1197, 3.0449], device='cuda:6'), covar=tensor([0.1308, 0.1865, 0.1611, 0.2347, 0.0934, 0.1497, 0.2526, 0.0932], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0203, 0.0192, 0.0191, 0.0176, 0.0213, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:39:04,846 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5609, 1.5366, 1.3709, 1.5359, 1.9514, 1.8308, 1.5599, 1.3501], device='cuda:6'), covar=tensor([0.0313, 0.0309, 0.0624, 0.0327, 0.0199, 0.0492, 0.0334, 0.0396], device='cuda:6'), in_proj_covar=tensor([0.0096, 0.0107, 0.0143, 0.0111, 0.0100, 0.0108, 0.0098, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.4502e-05, 8.2511e-05, 1.1313e-04, 8.5763e-05, 7.7652e-05, 8.0067e-05, 7.3422e-05, 8.3809e-05], device='cuda:6') 2023-03-26 21:39:10,805 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=98810.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:39:12,638 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3856, 1.5989, 1.2429, 1.5186, 1.8601, 1.7662, 1.5792, 1.4346], device='cuda:6'), covar=tensor([0.0370, 0.0288, 0.0623, 0.0300, 0.0179, 0.0509, 0.0275, 0.0376], device='cuda:6'), in_proj_covar=tensor([0.0096, 0.0107, 0.0144, 0.0112, 0.0100, 0.0108, 0.0098, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.4592e-05, 8.2613e-05, 1.1332e-04, 8.5820e-05, 7.7717e-05, 8.0128e-05, 7.3544e-05, 8.3886e-05], device='cuda:6') 2023-03-26 21:39:17,933 INFO [finetune.py:976] (6/7) Epoch 18, batch 1450, loss[loss=0.1791, simple_loss=0.2507, pruned_loss=0.05377, over 4842.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2504, pruned_loss=0.05562, over 952418.92 frames. ], batch size: 49, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:39:19,767 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8020, 1.6984, 1.5281, 1.3090, 1.7861, 1.6137, 1.6348, 1.8094], device='cuda:6'), covar=tensor([0.1270, 0.1729, 0.2607, 0.2286, 0.2292, 0.1568, 0.2248, 0.1609], device='cuda:6'), in_proj_covar=tensor([0.0183, 0.0187, 0.0234, 0.0254, 0.0245, 0.0202, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:39:22,669 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0478, 1.6563, 2.1554, 2.0408, 1.8042, 1.8106, 1.9760, 1.9510], device='cuda:6'), covar=tensor([0.4188, 0.4332, 0.3143, 0.4257, 0.5311, 0.3994, 0.4929, 0.3181], device='cuda:6'), in_proj_covar=tensor([0.0249, 0.0240, 0.0259, 0.0275, 0.0274, 0.0248, 0.0284, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:39:40,442 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.618e+02 1.947e+02 2.343e+02 3.750e+02, threshold=3.895e+02, percent-clipped=0.0 2023-03-26 21:39:52,510 INFO [finetune.py:976] (6/7) Epoch 18, batch 1500, loss[loss=0.1924, simple_loss=0.2532, pruned_loss=0.06579, over 4710.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2529, pruned_loss=0.05711, over 952828.24 frames. 
], batch size: 23, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:40:06,460 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98890.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:10,017 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98895.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:12,872 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=98899.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:26,174 INFO [finetune.py:976] (6/7) Epoch 18, batch 1550, loss[loss=0.2059, simple_loss=0.2663, pruned_loss=0.07275, over 4888.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2535, pruned_loss=0.05656, over 953169.53 frames. ], batch size: 32, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:40:47,923 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.516e+02 1.782e+02 2.221e+02 4.511e+02, threshold=3.564e+02, percent-clipped=1.0 2023-03-26 21:40:48,077 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98951.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:40:51,132 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=98956.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:41:00,139 INFO [finetune.py:976] (6/7) Epoch 18, batch 1600, loss[loss=0.1474, simple_loss=0.2053, pruned_loss=0.04471, over 4925.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2511, pruned_loss=0.05606, over 954287.31 frames. ], batch size: 33, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:41:00,254 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=98971.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:41:13,876 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-26 21:41:28,060 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8979, 1.6765, 1.5327, 1.2635, 1.6495, 1.6606, 1.6074, 2.2066], device='cuda:6'), covar=tensor([0.4142, 0.4181, 0.3344, 0.4016, 0.4130, 0.2451, 0.3940, 0.2002], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0260, 0.0226, 0.0273, 0.0250, 0.0218, 0.0250, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:41:29,213 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4985, 1.3850, 2.0612, 3.2290, 2.0865, 2.4360, 0.9114, 2.8314], device='cuda:6'), covar=tensor([0.2062, 0.2114, 0.1652, 0.1059, 0.1017, 0.1404, 0.2328, 0.0626], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0115, 0.0133, 0.0164, 0.0100, 0.0135, 0.0124, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 21:41:33,951 INFO [finetune.py:976] (6/7) Epoch 18, batch 1650, loss[loss=0.1741, simple_loss=0.2413, pruned_loss=0.05342, over 3133.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.249, pruned_loss=0.05577, over 953160.65 frames. ], batch size: 13, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:41:41,742 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99032.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:42:02,318 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.05 vs. 
limit=5.0 2023-03-26 21:42:02,530 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.819e+01 1.633e+02 2.015e+02 2.343e+02 3.841e+02, threshold=4.030e+02, percent-clipped=3.0 2023-03-26 21:42:03,114 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99051.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:42:22,691 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99066.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:42:25,676 INFO [finetune.py:976] (6/7) Epoch 18, batch 1700, loss[loss=0.1122, simple_loss=0.1894, pruned_loss=0.01754, over 4757.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2485, pruned_loss=0.05628, over 955505.77 frames. ], batch size: 28, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:43:01,005 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99112.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:43:06,384 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99114.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:43:10,653 INFO [finetune.py:976] (6/7) Epoch 18, batch 1750, loss[loss=0.1778, simple_loss=0.2488, pruned_loss=0.05341, over 4797.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2502, pruned_loss=0.05726, over 955299.76 frames. ], batch size: 29, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:43:18,453 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1038, 2.0817, 1.6560, 2.2876, 2.0886, 1.7772, 2.5199, 2.0585], device='cuda:6'), covar=tensor([0.1405, 0.2193, 0.3058, 0.2442, 0.2549, 0.1802, 0.2767, 0.1889], device='cuda:6'), in_proj_covar=tensor([0.0183, 0.0188, 0.0234, 0.0253, 0.0245, 0.0202, 0.0213, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:43:38,846 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.106e+01 1.643e+02 1.917e+02 2.422e+02 4.876e+02, threshold=3.835e+02, percent-clipped=2.0 2023-03-26 21:43:51,913 INFO [finetune.py:976] (6/7) Epoch 18, batch 1800, loss[loss=0.1584, simple_loss=0.2456, pruned_loss=0.03557, over 4811.00 frames. ], tot_loss[loss=0.1833, simple_loss=0.2523, pruned_loss=0.05709, over 955901.35 frames. ], batch size: 51, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:43:53,881 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6768, 1.5953, 1.5756, 1.6213, 1.4187, 3.5996, 1.3724, 1.8476], device='cuda:6'), covar=tensor([0.3312, 0.2494, 0.2104, 0.2337, 0.1562, 0.0192, 0.2649, 0.1316], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0122, 0.0113, 0.0095, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 21:44:10,884 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99199.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:25,719 INFO [finetune.py:976] (6/7) Epoch 18, batch 1850, loss[loss=0.1948, simple_loss=0.2603, pruned_loss=0.06463, over 4825.00 frames. ], tot_loss[loss=0.186, simple_loss=0.2549, pruned_loss=0.05854, over 955092.30 frames. 
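tot_loss in these records is a smoothed, frame-weighted average rather than the raw batch loss, which is why its frame count climbs early in the epoch (686866 → 792120 → ...) and then hovers near 955k. A sketch under an assumed exponential-forgetting rule; the decay constant here is an assumption chosen to match the plateau:

```python
class RunningLoss:
    """Frame-weighted, exponentially smoothed loss (assumed bookkeeping)."""

    def __init__(self, decay: float = 1.0 - 1.0 / 200):
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, num_frames: float) -> None:
        # Forget old batches a little, then fold in the new one.
        self.loss_sum = self.loss_sum * self.decay + batch_loss * num_frames
        self.frames = self.frames * self.decay + num_frames

    @property
    def value(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)
```

With decay 1 - 1/200 and roughly 4,800 frames per batch (as in the per-batch records here), the effective window saturates near 200 x 4,800, about 960k frames, consistent with the plateau in these logs.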
], batch size: 30, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:44:30,707 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99229.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:42,401 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99246.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:43,022 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99247.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:45,840 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.618e+02 1.939e+02 2.302e+02 3.831e+02, threshold=3.878e+02, percent-clipped=0.0 2023-03-26 21:44:45,943 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99251.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:53,925 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-26 21:44:57,649 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99268.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:44:59,346 INFO [finetune.py:976] (6/7) Epoch 18, batch 1900, loss[loss=0.1758, simple_loss=0.2408, pruned_loss=0.05537, over 4774.00 frames. ], tot_loss[loss=0.1857, simple_loss=0.2554, pruned_loss=0.05802, over 953841.06 frames. ], batch size: 26, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:45:11,554 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99290.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:20,585 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 21:45:33,106 INFO [finetune.py:976] (6/7) Epoch 18, batch 1950, loss[loss=0.1613, simple_loss=0.2336, pruned_loss=0.04447, over 4905.00 frames. ], tot_loss[loss=0.1843, simple_loss=0.2541, pruned_loss=0.05729, over 955085.75 frames. ], batch size: 32, lr: 3.35e-03, grad_scale: 32.0 2023-03-26 21:45:35,055 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99324.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:36,833 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99327.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:38,115 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99329.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:45:52,709 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.452e+02 1.660e+02 1.987e+02 4.820e+02, threshold=3.320e+02, percent-clipped=1.0 2023-03-26 21:46:06,275 INFO [finetune.py:976] (6/7) Epoch 18, batch 2000, loss[loss=0.1556, simple_loss=0.2318, pruned_loss=0.03967, over 4889.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.251, pruned_loss=0.05645, over 955374.52 frames. ], batch size: 32, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:46:15,391 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99385.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:46:23,702 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. 
limit=5.0 2023-03-26 21:46:30,184 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99407.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:46:39,603 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99420.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:46:40,079 INFO [finetune.py:976] (6/7) Epoch 18, batch 2050, loss[loss=0.1627, simple_loss=0.2344, pruned_loss=0.04552, over 4870.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2477, pruned_loss=0.05512, over 956731.28 frames. ], batch size: 34, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:46:59,828 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.434e+02 1.742e+02 2.145e+02 5.049e+02, threshold=3.484e+02, percent-clipped=5.0 2023-03-26 21:47:17,710 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99468.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:47:19,434 INFO [finetune.py:976] (6/7) Epoch 18, batch 2100, loss[loss=0.2053, simple_loss=0.2593, pruned_loss=0.07568, over 4864.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.246, pruned_loss=0.05421, over 957950.62 frames. ], batch size: 31, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:47:31,495 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99481.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:47:58,125 INFO [finetune.py:976] (6/7) Epoch 18, batch 2150, loss[loss=0.1536, simple_loss=0.2438, pruned_loss=0.03172, over 4822.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2491, pruned_loss=0.05515, over 955803.54 frames. ], batch size: 40, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:48:08,438 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99529.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 21:48:16,785 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6731, 1.4668, 1.1363, 0.2376, 1.3130, 1.4654, 1.3889, 1.4262], device='cuda:6'), covar=tensor([0.0917, 0.0791, 0.1270, 0.2071, 0.1318, 0.2207, 0.2245, 0.0877], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0196, 0.0201, 0.0184, 0.0215, 0.0210, 0.0224, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:48:28,288 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99546.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:48:35,645 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.546e+02 1.925e+02 2.366e+02 3.688e+02, threshold=3.850e+02, percent-clipped=2.0 2023-03-26 21:48:35,749 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99551.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:48:52,014 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99568.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:48:53,717 INFO [finetune.py:976] (6/7) Epoch 18, batch 2200, loss[loss=0.1578, simple_loss=0.2335, pruned_loss=0.04103, over 4743.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.252, pruned_loss=0.05569, over 956259.78 frames. 
], batch size: 59, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:49:03,270 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99585.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:05,756 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99589.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:08,745 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99594.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:11,758 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99599.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:11,825 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2185, 1.9667, 1.4887, 0.5418, 1.7734, 1.8511, 1.6704, 1.7923], device='cuda:6'), covar=tensor([0.0783, 0.0816, 0.1418, 0.1997, 0.1258, 0.2157, 0.2134, 0.0913], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0195, 0.0200, 0.0184, 0.0214, 0.0209, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:49:27,189 INFO [finetune.py:976] (6/7) Epoch 18, batch 2250, loss[loss=0.1505, simple_loss=0.2238, pruned_loss=0.03857, over 4753.00 frames. ], tot_loss[loss=0.1821, simple_loss=0.2523, pruned_loss=0.05595, over 955765.74 frames. ], batch size: 28, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:49:29,581 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99624.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:31,405 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99627.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:33,071 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99629.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:39,629 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2258, 2.2062, 1.6375, 2.2245, 2.1848, 1.8594, 2.5540, 2.3060], device='cuda:6'), covar=tensor([0.1223, 0.2194, 0.2977, 0.2582, 0.2334, 0.1647, 0.2954, 0.1629], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0187, 0.0233, 0.0252, 0.0244, 0.0201, 0.0213, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:49:46,340 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:49:46,818 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.914e+01 1.516e+02 1.824e+02 2.092e+02 3.162e+02, threshold=3.647e+02, percent-clipped=0.0 2023-03-26 21:49:55,230 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0175, 1.8540, 1.5451, 1.7558, 1.7126, 1.6848, 1.7988, 2.4836], device='cuda:6'), covar=tensor([0.3621, 0.4125, 0.3092, 0.3690, 0.4243, 0.2376, 0.3807, 0.1612], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0261, 0.0227, 0.0275, 0.0251, 0.0219, 0.0251, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:50:00,832 INFO [finetune.py:976] (6/7) Epoch 18, batch 2300, loss[loss=0.2047, simple_loss=0.2655, pruned_loss=0.07197, over 4898.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2534, pruned_loss=0.05598, over 956306.99 frames. 
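The learning rate in these records decays in small steps as training proceeds (3.36e-03, then 3.35e-03, then 3.34e-03 here), driven by both the cumulative batch count and the epoch number, consistent with an Eden-style schedule. A sketch of that rule; the lr_batches/lr_epochs constants and the exact step counts fed to it are assumptions:

```python
def eden_lr(base_lr: float, batch: int, epoch: int,
            lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
    """Eden-style LR: decays smoothly in both batch count and epoch."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.5
    return base_lr * batch_factor * epoch_factor

print(f"{eden_lr(0.004, 99600, 18):.2e}")  # ~3.3e-03, the scale logged here
```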
], batch size: 36, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:50:03,812 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99675.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:06,804 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:06,831 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=99680.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:24,613 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99707.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:34,082 INFO [finetune.py:976] (6/7) Epoch 18, batch 2350, loss[loss=0.1582, simple_loss=0.2281, pruned_loss=0.04417, over 4816.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.251, pruned_loss=0.05538, over 955503.91 frames. ], batch size: 38, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:50:47,885 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=99741.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:50:52,115 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.91 vs. limit=2.0 2023-03-26 21:50:54,366 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.551e+02 1.845e+02 2.144e+02 4.060e+02, threshold=3.690e+02, percent-clipped=1.0 2023-03-26 21:50:56,923 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99755.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:51:08,062 INFO [finetune.py:976] (6/7) Epoch 18, batch 2400, loss[loss=0.1769, simple_loss=0.2499, pruned_loss=0.05193, over 4833.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2474, pruned_loss=0.05443, over 955363.73 frames. ], batch size: 38, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:51:11,641 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99776.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:51:23,530 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5750, 1.4896, 1.9277, 2.9411, 1.9838, 2.1920, 1.0840, 2.4989], device='cuda:6'), covar=tensor([0.1630, 0.1357, 0.1265, 0.0627, 0.0801, 0.1120, 0.1545, 0.0492], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0134, 0.0165, 0.0101, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 21:51:40,157 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-26 21:51:41,396 INFO [finetune.py:976] (6/7) Epoch 18, batch 2450, loss[loss=0.1878, simple_loss=0.2588, pruned_loss=0.05846, over 4909.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2456, pruned_loss=0.05395, over 957687.19 frames. ], batch size: 37, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:51:43,695 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99824.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:51:57,696 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.37 vs. limit=5.0 2023-03-26 21:52:01,734 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.091e+01 1.409e+02 1.705e+02 2.080e+02 4.896e+02, threshold=3.409e+02, percent-clipped=2.0 2023-03-26 21:52:14,327 INFO [finetune.py:976] (6/7) Epoch 18, batch 2500, loss[loss=0.1708, simple_loss=0.2456, pruned_loss=0.04797, over 4783.00 frames. 
], tot_loss[loss=0.1788, simple_loss=0.2477, pruned_loss=0.05491, over 954301.38 frames. ], batch size: 25, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:52:26,320 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99885.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:52:50,111 INFO [finetune.py:976] (6/7) Epoch 18, batch 2550, loss[loss=0.1504, simple_loss=0.2266, pruned_loss=0.03713, over 4741.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2514, pruned_loss=0.05599, over 956252.46 frames. ], batch size: 54, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:52:52,504 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99924.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:52:52,521 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99924.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:52:58,897 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:53:06,653 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=99945.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:53:10,117 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7208, 1.1381, 0.8716, 1.5742, 2.0871, 1.0550, 1.3640, 1.4839], device='cuda:6'), covar=tensor([0.1479, 0.2223, 0.1872, 0.1201, 0.1801, 0.1890, 0.1601, 0.2025], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0096, 0.0110, 0.0092, 0.0120, 0.0094, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 21:53:10,619 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.138e+02 1.647e+02 1.933e+02 2.438e+02 4.501e+02, threshold=3.867e+02, percent-clipped=7.0 2023-03-26 21:53:15,993 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-26 21:53:20,822 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1743, 1.9264, 2.0897, 1.5799, 2.0358, 2.1424, 2.1783, 1.7327], device='cuda:6'), covar=tensor([0.0450, 0.0617, 0.0581, 0.0821, 0.0639, 0.0556, 0.0509, 0.1010], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0134, 0.0141, 0.0121, 0.0123, 0.0139, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:53:29,759 INFO [finetune.py:976] (6/7) Epoch 18, batch 2600, loss[loss=0.1567, simple_loss=0.2368, pruned_loss=0.0383, over 4764.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2528, pruned_loss=0.05629, over 954565.23 frames. ], batch size: 28, lr: 3.34e-03, grad_scale: 32.0 2023-03-26 21:53:30,433 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=99972.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:53:40,421 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=99980.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:54:12,878 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100007.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:54:24,616 INFO [finetune.py:976] (6/7) Epoch 18, batch 2650, loss[loss=0.1845, simple_loss=0.2705, pruned_loss=0.04924, over 4898.00 frames. ], tot_loss[loss=0.1841, simple_loss=0.2545, pruned_loss=0.05683, over 954706.20 frames. 
], batch size: 37, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:54:29,390 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100028.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:54:35,198 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100036.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:54:42,291 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0867, 2.0328, 2.1062, 1.5441, 2.0847, 2.1558, 2.1438, 1.6644], device='cuda:6'), covar=tensor([0.0505, 0.0595, 0.0626, 0.0797, 0.0636, 0.0588, 0.0574, 0.1083], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0135, 0.0141, 0.0122, 0.0124, 0.0139, 0.0141, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:54:46,209 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.530e+02 1.912e+02 2.295e+02 4.144e+02, threshold=3.823e+02, percent-clipped=1.0 2023-03-26 21:54:56,612 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100068.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:54:58,306 INFO [finetune.py:976] (6/7) Epoch 18, batch 2700, loss[loss=0.1428, simple_loss=0.2259, pruned_loss=0.02985, over 4752.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2536, pruned_loss=0.05661, over 957411.75 frames. ], batch size: 28, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:55:01,885 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100076.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:55:04,447 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.32 vs. limit=5.0 2023-03-26 21:55:15,878 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.48 vs. limit=5.0 2023-03-26 21:55:31,863 INFO [finetune.py:976] (6/7) Epoch 18, batch 2750, loss[loss=0.1966, simple_loss=0.2545, pruned_loss=0.06937, over 4895.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2507, pruned_loss=0.05594, over 956984.91 frames. ], batch size: 35, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:55:33,702 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100124.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:55:33,754 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100124.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:55:52,997 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.510e+02 1.766e+02 2.096e+02 4.575e+02, threshold=3.532e+02, percent-clipped=1.0 2023-03-26 21:55:54,326 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1685, 2.7671, 2.5311, 1.3403, 2.7631, 2.2681, 2.1769, 2.5239], device='cuda:6'), covar=tensor([0.1142, 0.0819, 0.1526, 0.2225, 0.1501, 0.2210, 0.2105, 0.1180], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0193, 0.0198, 0.0182, 0.0212, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:56:05,349 INFO [finetune.py:976] (6/7) Epoch 18, batch 2800, loss[loss=0.1536, simple_loss=0.2228, pruned_loss=0.04217, over 4770.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2462, pruned_loss=0.05403, over 957982.86 frames. 
], batch size: 26, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:56:06,018 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100172.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:56:11,428 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9740, 1.3577, 1.8637, 1.8489, 1.7075, 1.7052, 1.8202, 1.8152], device='cuda:6'), covar=tensor([0.4708, 0.4646, 0.4340, 0.4254, 0.5961, 0.4590, 0.5256, 0.4147], device='cuda:6'), in_proj_covar=tensor([0.0249, 0.0239, 0.0259, 0.0274, 0.0273, 0.0248, 0.0284, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:56:20,137 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 21:56:38,931 INFO [finetune.py:976] (6/7) Epoch 18, batch 2850, loss[loss=0.1873, simple_loss=0.2563, pruned_loss=0.05913, over 4913.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2462, pruned_loss=0.0549, over 955835.99 frames. ], batch size: 36, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:56:40,883 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100224.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:56:44,584 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2422, 2.0446, 1.7697, 1.9695, 1.8950, 1.9471, 1.9620, 2.7003], device='cuda:6'), covar=tensor([0.3686, 0.3813, 0.3362, 0.3336, 0.3760, 0.2431, 0.3510, 0.1709], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0226, 0.0272, 0.0249, 0.0218, 0.0249, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:56:49,835 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100238.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:56:54,572 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100245.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:56:59,196 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.104e+02 1.635e+02 1.899e+02 2.309e+02 4.393e+02, threshold=3.799e+02, percent-clipped=4.0 2023-03-26 21:57:11,715 INFO [finetune.py:976] (6/7) Epoch 18, batch 2900, loss[loss=0.1643, simple_loss=0.2337, pruned_loss=0.04748, over 4195.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2494, pruned_loss=0.05648, over 954413.11 frames. ], batch size: 65, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:57:12,869 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100272.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:57:26,156 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100293.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:57:30,873 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100299.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:57:35,928 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.48 vs. 
limit=5.0 2023-03-26 21:57:39,117 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4928, 1.3495, 1.2111, 1.5029, 1.5400, 1.5454, 0.9155, 1.2538], device='cuda:6'), covar=tensor([0.2295, 0.2218, 0.2080, 0.1696, 0.1754, 0.1333, 0.2757, 0.1980], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0211, 0.0190, 0.0240, 0.0185, 0.0215, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 21:57:45,542 INFO [finetune.py:976] (6/7) Epoch 18, batch 2950, loss[loss=0.1818, simple_loss=0.2483, pruned_loss=0.05763, over 4843.00 frames. ], tot_loss[loss=0.1824, simple_loss=0.2514, pruned_loss=0.05665, over 954140.57 frames. ], batch size: 49, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:57:55,266 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100336.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:58:06,321 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.490e+02 1.822e+02 2.174e+02 4.072e+02, threshold=3.643e+02, percent-clipped=2.0 2023-03-26 21:58:13,588 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100363.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 21:58:18,523 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 21:58:18,819 INFO [finetune.py:976] (6/7) Epoch 18, batch 3000, loss[loss=0.1339, simple_loss=0.1993, pruned_loss=0.03421, over 3801.00 frames. ], tot_loss[loss=0.1838, simple_loss=0.2531, pruned_loss=0.05722, over 954146.26 frames. ], batch size: 16, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:58:18,820 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 21:58:31,198 INFO [finetune.py:1010] (6/7) Epoch 18, validation: loss=0.1568, simple_loss=0.2261, pruned_loss=0.04375, over 2265189.00 frames. 2023-03-26 21:58:31,198 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 21:58:44,390 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100384.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:59:06,995 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100402.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 21:59:11,357 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 21:59:29,744 INFO [finetune.py:976] (6/7) Epoch 18, batch 3050, loss[loss=0.1673, simple_loss=0.2571, pruned_loss=0.03869, over 4789.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2535, pruned_loss=0.05665, over 953452.92 frames. ], batch size: 29, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 21:59:53,774 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.572e+02 1.917e+02 2.276e+02 3.597e+02, threshold=3.833e+02, percent-clipped=0.0 2023-03-26 22:00:01,164 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100463.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:00:07,398 INFO [finetune.py:976] (6/7) Epoch 18, batch 3100, loss[loss=0.1943, simple_loss=0.2571, pruned_loss=0.06578, over 4846.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2516, pruned_loss=0.05599, over 952652.66 frames. ], batch size: 44, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 22:00:25,945 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.81 vs. 
limit=2.0 2023-03-26 22:00:40,542 INFO [finetune.py:976] (6/7) Epoch 18, batch 3150, loss[loss=0.1551, simple_loss=0.2208, pruned_loss=0.04474, over 4920.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2493, pruned_loss=0.05519, over 955282.10 frames. ], batch size: 37, lr: 3.34e-03, grad_scale: 16.0 2023-03-26 22:00:53,539 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8794, 1.7364, 1.5108, 1.3415, 1.9329, 1.6517, 1.8033, 1.8498], device='cuda:6'), covar=tensor([0.1380, 0.2068, 0.3078, 0.2588, 0.2534, 0.1677, 0.3047, 0.1775], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0187, 0.0234, 0.0252, 0.0244, 0.0202, 0.0213, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:00:59,250 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4621, 1.4187, 1.7826, 1.6442, 1.5830, 3.2628, 1.3728, 1.5653], device='cuda:6'), covar=tensor([0.0981, 0.1696, 0.1152, 0.1011, 0.1515, 0.0258, 0.1513, 0.1722], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 22:01:00,942 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.466e+01 1.523e+02 1.835e+02 2.195e+02 4.344e+02, threshold=3.670e+02, percent-clipped=3.0 2023-03-26 22:01:12,892 INFO [finetune.py:976] (6/7) Epoch 18, batch 3200, loss[loss=0.1792, simple_loss=0.2372, pruned_loss=0.06057, over 4816.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2456, pruned_loss=0.05375, over 957159.71 frames. ], batch size: 30, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:01:28,940 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100594.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:01:46,315 INFO [finetune.py:976] (6/7) Epoch 18, batch 3250, loss[loss=0.1573, simple_loss=0.2314, pruned_loss=0.04164, over 4760.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2471, pruned_loss=0.0543, over 955113.29 frames. ], batch size: 27, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:02:08,114 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.502e+02 1.862e+02 2.235e+02 4.464e+02, threshold=3.723e+02, percent-clipped=3.0 2023-03-26 22:02:08,236 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3071, 1.3215, 1.5575, 1.5486, 1.3732, 2.8952, 1.2587, 1.4093], device='cuda:6'), covar=tensor([0.1109, 0.1910, 0.1253, 0.1081, 0.1728, 0.0266, 0.1569, 0.1913], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 22:02:14,917 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100663.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 22:02:20,159 INFO [finetune.py:976] (6/7) Epoch 18, batch 3300, loss[loss=0.1703, simple_loss=0.2414, pruned_loss=0.04966, over 4933.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2512, pruned_loss=0.0556, over 955127.53 frames. 
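The finetune.py validation records a little above (Epoch 18, batch 3000) interleave a validation pass with training: the model is switched to eval mode, the same losses are averaged over the dev set weighted by frames (validation: loss=0.1568), and peak memory is reported. A hedged sketch of such a pass; the batch keys and a criterion returning (loss, num_frames) are assumptions, and the real finetune.py tracks several loss components:

```python
import torch

@torch.no_grad()
def validate(model, dev_loader, criterion, device):
    """Frame-weighted average loss over a dev set (illustrative only)."""
    model.eval()
    tot, frames = 0.0, 0.0
    for batch in dev_loader:
        feats = batch["inputs"].to(device)
        targets = batch["targets"].to(device)
        loss, num_frames = criterion(model(feats), targets)
        tot += loss.item() * num_frames
        frames += num_frames
    model.train()  # resume training mode afterwards
    return tot / max(frames, 1.0)
```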
], batch size: 33, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:02:47,029 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100711.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:02:53,407 INFO [finetune.py:976] (6/7) Epoch 18, batch 3350, loss[loss=0.1175, simple_loss=0.1924, pruned_loss=0.02133, over 4744.00 frames. ], tot_loss[loss=0.1827, simple_loss=0.2527, pruned_loss=0.05634, over 954492.36 frames. ], batch size: 23, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:02:55,517 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 22:02:59,515 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=100730.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:03:14,083 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.901e+01 1.577e+02 1.865e+02 2.249e+02 4.268e+02, threshold=3.731e+02, percent-clipped=3.0 2023-03-26 22:03:18,264 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=100758.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:03:23,244 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1907, 2.1439, 1.8761, 2.1485, 2.0331, 2.0381, 2.0077, 2.8375], device='cuda:6'), covar=tensor([0.3419, 0.4771, 0.3276, 0.4455, 0.4446, 0.2315, 0.4910, 0.1497], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0229, 0.0274, 0.0251, 0.0220, 0.0252, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:03:26,646 INFO [finetune.py:976] (6/7) Epoch 18, batch 3400, loss[loss=0.1561, simple_loss=0.2393, pruned_loss=0.0364, over 4915.00 frames. ], tot_loss[loss=0.1829, simple_loss=0.2533, pruned_loss=0.05626, over 954380.38 frames. ], batch size: 41, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:03:39,073 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9085, 3.9822, 3.6977, 1.9462, 4.0708, 3.0481, 0.8656, 2.8468], device='cuda:6'), covar=tensor([0.2164, 0.1895, 0.1452, 0.3474, 0.0971, 0.1014, 0.4723, 0.1304], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0130, 0.0160, 0.0124, 0.0149, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 22:03:40,307 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=100791.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:04:13,287 INFO [finetune.py:976] (6/7) Epoch 18, batch 3450, loss[loss=0.1895, simple_loss=0.2473, pruned_loss=0.0658, over 4931.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2524, pruned_loss=0.05541, over 955179.08 frames. 
], batch size: 33, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:04:14,017 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4017, 1.4056, 1.2346, 1.3738, 1.7069, 1.6162, 1.4524, 1.2305], device='cuda:6'), covar=tensor([0.0308, 0.0280, 0.0610, 0.0301, 0.0213, 0.0434, 0.0285, 0.0388], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0107, 0.0143, 0.0110, 0.0100, 0.0108, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.4024e-05, 8.2353e-05, 1.1261e-04, 8.4594e-05, 7.8234e-05, 7.9916e-05, 7.3467e-05, 8.3214e-05], device='cuda:6') 2023-03-26 22:04:49,166 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.551e+02 1.762e+02 2.136e+02 3.810e+02, threshold=3.524e+02, percent-clipped=1.0 2023-03-26 22:05:03,927 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.70 vs. limit=5.0 2023-03-26 22:05:04,970 INFO [finetune.py:976] (6/7) Epoch 18, batch 3500, loss[loss=0.1433, simple_loss=0.2216, pruned_loss=0.03255, over 4863.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2483, pruned_loss=0.05372, over 956608.60 frames. ], batch size: 31, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:05:21,050 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=100894.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:05:38,764 INFO [finetune.py:976] (6/7) Epoch 18, batch 3550, loss[loss=0.16, simple_loss=0.225, pruned_loss=0.04746, over 4923.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2469, pruned_loss=0.05407, over 956702.78 frames. ], batch size: 46, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:05:49,114 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2902, 2.1930, 1.8257, 2.3056, 2.2020, 1.9505, 2.6007, 2.2660], device='cuda:6'), covar=tensor([0.1277, 0.2228, 0.2854, 0.2432, 0.2490, 0.1643, 0.2991, 0.1754], device='cuda:6'), in_proj_covar=tensor([0.0182, 0.0186, 0.0232, 0.0251, 0.0243, 0.0201, 0.0212, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:05:52,400 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=100942.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:05:59,322 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.557e+01 1.553e+02 1.835e+02 2.348e+02 4.609e+02, threshold=3.670e+02, percent-clipped=4.0 2023-03-26 22:06:12,132 INFO [finetune.py:976] (6/7) Epoch 18, batch 3600, loss[loss=0.1557, simple_loss=0.2397, pruned_loss=0.0358, over 4820.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2451, pruned_loss=0.05356, over 957406.69 frames. ], batch size: 33, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:06:41,391 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.2200, 4.6112, 4.8011, 5.0379, 4.9295, 4.7351, 5.3615, 1.5906], device='cuda:6'), covar=tensor([0.0761, 0.0783, 0.0693, 0.0928, 0.1325, 0.1539, 0.0504, 0.5769], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0244, 0.0278, 0.0291, 0.0334, 0.0282, 0.0301, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:06:46,075 INFO [finetune.py:976] (6/7) Epoch 18, batch 3650, loss[loss=0.1726, simple_loss=0.2558, pruned_loss=0.04469, over 4819.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2475, pruned_loss=0.05423, over 958170.19 frames. 
], batch size: 38, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:07:06,791 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.559e+02 1.860e+02 2.177e+02 4.070e+02, threshold=3.719e+02, percent-clipped=1.0 2023-03-26 22:07:10,989 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101058.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:07:18,907 INFO [finetune.py:976] (6/7) Epoch 18, batch 3700, loss[loss=0.2157, simple_loss=0.2913, pruned_loss=0.07008, over 4849.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2496, pruned_loss=0.05428, over 956575.01 frames. ], batch size: 44, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:07:28,518 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=101086.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:07:42,615 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=101106.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:07:43,274 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101107.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:07:52,636 INFO [finetune.py:976] (6/7) Epoch 18, batch 3750, loss[loss=0.1882, simple_loss=0.248, pruned_loss=0.06415, over 4865.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2525, pruned_loss=0.05579, over 956679.74 frames. ], batch size: 34, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:08:12,837 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.641e+02 1.814e+02 2.131e+02 4.110e+02, threshold=3.627e+02, percent-clipped=2.0 2023-03-26 22:08:24,438 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101168.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:08:26,163 INFO [finetune.py:976] (6/7) Epoch 18, batch 3800, loss[loss=0.1868, simple_loss=0.2735, pruned_loss=0.05008, over 4901.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.253, pruned_loss=0.05574, over 954403.65 frames. ], batch size: 36, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:08:59,891 INFO [finetune.py:976] (6/7) Epoch 18, batch 3850, loss[loss=0.138, simple_loss=0.214, pruned_loss=0.03104, over 4668.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2515, pruned_loss=0.05529, over 956105.99 frames. ], batch size: 23, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:09:30,994 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.525e+02 1.862e+02 2.180e+02 4.556e+02, threshold=3.724e+02, percent-clipped=2.0 2023-03-26 22:09:57,717 INFO [finetune.py:976] (6/7) Epoch 18, batch 3900, loss[loss=0.2253, simple_loss=0.28, pruned_loss=0.08526, over 4096.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2484, pruned_loss=0.05441, over 955726.47 frames. ], batch size: 65, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:10:41,970 INFO [finetune.py:976] (6/7) Epoch 18, batch 3950, loss[loss=0.2076, simple_loss=0.2741, pruned_loss=0.07057, over 4887.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2446, pruned_loss=0.05263, over 956007.44 frames. 
], batch size: 35, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:10:58,790 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5144, 2.3327, 1.6974, 0.7575, 1.9834, 1.9825, 1.8513, 2.0405], device='cuda:6'), covar=tensor([0.0961, 0.0780, 0.1602, 0.2114, 0.1379, 0.2491, 0.2181, 0.0874], device='cuda:6'), in_proj_covar=tensor([0.0166, 0.0190, 0.0197, 0.0180, 0.0209, 0.0205, 0.0220, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:11:02,253 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.470e+02 1.754e+02 2.083e+02 3.090e+02, threshold=3.508e+02, percent-clipped=0.0 2023-03-26 22:11:10,696 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0183, 1.9002, 2.0716, 1.3591, 1.9587, 2.0381, 2.0607, 1.6338], device='cuda:6'), covar=tensor([0.0590, 0.0701, 0.0629, 0.0930, 0.0636, 0.0711, 0.0622, 0.1169], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0135, 0.0140, 0.0120, 0.0123, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:11:15,360 INFO [finetune.py:976] (6/7) Epoch 18, batch 4000, loss[loss=0.1803, simple_loss=0.2512, pruned_loss=0.05467, over 4817.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2455, pruned_loss=0.05352, over 954417.74 frames. ], batch size: 38, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:11:15,702 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 22:11:26,005 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101386.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 22:11:32,761 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7115, 1.6632, 1.5105, 1.9301, 2.0669, 1.8919, 1.3218, 1.4818], device='cuda:6'), covar=tensor([0.2340, 0.2133, 0.2037, 0.1731, 0.1676, 0.1303, 0.2662, 0.2179], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0209, 0.0213, 0.0193, 0.0241, 0.0186, 0.0215, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:11:49,438 INFO [finetune.py:976] (6/7) Epoch 18, batch 4050, loss[loss=0.1891, simple_loss=0.2693, pruned_loss=0.05451, over 4739.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2498, pruned_loss=0.05511, over 954778.74 frames. ], batch size: 59, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:11:58,806 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=101434.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:12:10,021 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.670e+02 2.040e+02 2.363e+02 9.256e+02, threshold=4.080e+02, percent-clipped=2.0 2023-03-26 22:12:17,269 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=101463.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:12:22,998 INFO [finetune.py:976] (6/7) Epoch 18, batch 4100, loss[loss=0.1593, simple_loss=0.2317, pruned_loss=0.04345, over 4784.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2514, pruned_loss=0.05565, over 954222.41 frames. ], batch size: 29, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:12:56,254 INFO [finetune.py:976] (6/7) Epoch 18, batch 4150, loss[loss=0.1792, simple_loss=0.247, pruned_loss=0.05575, over 4189.00 frames. ], tot_loss[loss=0.1825, simple_loss=0.2525, pruned_loss=0.05621, over 953423.08 frames. 
], batch size: 65, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:13:16,878 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.507e+01 1.508e+02 1.851e+02 2.208e+02 3.984e+02, threshold=3.702e+02, percent-clipped=0.0 2023-03-26 22:13:29,469 INFO [finetune.py:976] (6/7) Epoch 18, batch 4200, loss[loss=0.2063, simple_loss=0.2673, pruned_loss=0.07268, over 4784.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2524, pruned_loss=0.05553, over 955986.50 frames. ], batch size: 51, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:13:29,557 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2761, 3.6283, 3.8512, 4.0919, 4.0253, 3.7671, 4.3646, 1.3553], device='cuda:6'), covar=tensor([0.0833, 0.0994, 0.0946, 0.0971, 0.1251, 0.1638, 0.0697, 0.5499], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0246, 0.0280, 0.0293, 0.0335, 0.0283, 0.0303, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:13:50,259 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. limit=2.0 2023-03-26 22:14:03,051 INFO [finetune.py:976] (6/7) Epoch 18, batch 4250, loss[loss=0.2532, simple_loss=0.3021, pruned_loss=0.1021, over 4796.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2502, pruned_loss=0.05503, over 956460.61 frames. ], batch size: 45, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:14:24,255 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.580e+02 1.774e+02 2.219e+02 3.425e+02, threshold=3.547e+02, percent-clipped=0.0 2023-03-26 22:14:38,480 INFO [finetune.py:976] (6/7) Epoch 18, batch 4300, loss[loss=0.1489, simple_loss=0.2237, pruned_loss=0.03708, over 4782.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2468, pruned_loss=0.05385, over 955328.17 frames. ], batch size: 29, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:14:39,544 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-26 22:14:54,352 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101684.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:15:33,160 INFO [finetune.py:976] (6/7) Epoch 18, batch 4350, loss[loss=0.1517, simple_loss=0.2198, pruned_loss=0.04176, over 4789.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2442, pruned_loss=0.05279, over 954751.66 frames. ], batch size: 26, lr: 3.33e-03, grad_scale: 16.0 2023-03-26 22:16:04,624 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.48 vs. limit=5.0 2023-03-26 22:16:05,152 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101745.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:16:10,240 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.272e+01 1.483e+02 1.686e+02 2.087e+02 3.591e+02, threshold=3.373e+02, percent-clipped=1.0 2023-03-26 22:16:16,946 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=101763.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:16:17,033 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0 2023-03-26 22:16:21,724 INFO [finetune.py:976] (6/7) Epoch 18, batch 4400, loss[loss=0.2145, simple_loss=0.2843, pruned_loss=0.07242, over 4936.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2457, pruned_loss=0.05352, over 951917.81 frames. 
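
The scaling.py:679 "Whitening" lines compare a whiteness metric of some activation's per-group covariance against a limit. Below is a sketch of one plausible such metric: it equals 1.0 when the covariance eigenvalues within each group are all equal (perfectly "white") and grows as they become uneven. The exact formula in icefall's scaling.py is not recoverable from the log, so treat this as illustrative only:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    # x: (frames, channels). Returns 1.0 for equal eigenvalues per group,
    # larger values for more unequal (less white) covariance spectra.
    frames, channels = x.shape
    per_group = channels // num_groups
    ratios = []
    for g in range(num_groups):
        xg = x[:, g * per_group:(g + 1) * per_group]
        cov = (xg.T @ xg) / frames
        eigs = torch.linalg.eigvalsh(cov)
        ratios.append((eigs.pow(2).mean() / eigs.mean().pow(2)).item())
    return sum(ratios) / len(ratios)

x = torch.randn(1000, 96)
print(f"num_groups=8, num_channels=96, metric={whitening_metric(x, 8):.2f} vs. limit=2.0")

Under this reading, the logged "metric=1.65 vs. limit=2.0" style entries are periodic checks of how close each group's activations are to the limit at which a whitening penalty would engage.
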
], batch size: 38, lr: 3.32e-03, grad_scale: 16.0 2023-03-26 22:16:49,126 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0194, 1.3733, 1.8751, 1.9173, 1.6610, 1.6830, 1.7945, 1.7428], device='cuda:6'), covar=tensor([0.4170, 0.4356, 0.3676, 0.4108, 0.5271, 0.4189, 0.4839, 0.3516], device='cuda:6'), in_proj_covar=tensor([0.0248, 0.0239, 0.0259, 0.0275, 0.0274, 0.0248, 0.0282, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:16:49,627 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=101811.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:16:55,632 INFO [finetune.py:976] (6/7) Epoch 18, batch 4450, loss[loss=0.1603, simple_loss=0.2353, pruned_loss=0.04263, over 4766.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2496, pruned_loss=0.05521, over 952189.34 frames. ], batch size: 26, lr: 3.32e-03, grad_scale: 16.0 2023-03-26 22:17:16,753 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.606e+02 1.893e+02 2.313e+02 4.401e+02, threshold=3.785e+02, percent-clipped=7.0 2023-03-26 22:17:23,996 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2919, 1.5466, 1.6397, 0.8140, 1.5323, 1.8353, 1.8899, 1.4516], device='cuda:6'), covar=tensor([0.0797, 0.0577, 0.0470, 0.0500, 0.0525, 0.0550, 0.0278, 0.0693], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0152, 0.0124, 0.0127, 0.0131, 0.0130, 0.0142, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.1465e-05, 1.1008e-04, 8.8568e-05, 9.0028e-05, 9.2448e-05, 9.2945e-05, 1.0191e-04, 1.0666e-04], device='cuda:6') 2023-03-26 22:17:29,381 INFO [finetune.py:976] (6/7) Epoch 18, batch 4500, loss[loss=0.1679, simple_loss=0.2164, pruned_loss=0.05967, over 4021.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2514, pruned_loss=0.05603, over 953001.15 frames. ], batch size: 17, lr: 3.32e-03, grad_scale: 16.0 2023-03-26 22:17:49,799 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.87 vs. limit=5.0 2023-03-26 22:18:02,657 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101920.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:18:03,133 INFO [finetune.py:976] (6/7) Epoch 18, batch 4550, loss[loss=0.1864, simple_loss=0.2556, pruned_loss=0.05857, over 4813.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2525, pruned_loss=0.05607, over 952604.44 frames. 
], batch size: 33, lr: 3.32e-03, grad_scale: 16.0 2023-03-26 22:18:19,088 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3902, 1.3892, 1.7608, 1.7398, 1.5123, 3.3167, 1.4021, 1.5272], device='cuda:6'), covar=tensor([0.1122, 0.2008, 0.1136, 0.1008, 0.1642, 0.0274, 0.1494, 0.1862], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0090, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 22:18:24,138 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.522e+02 1.794e+02 2.336e+02 4.256e+02, threshold=3.587e+02, percent-clipped=2.0 2023-03-26 22:18:35,619 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3469, 3.7303, 3.9589, 4.1583, 4.1251, 3.7991, 4.4665, 1.5358], device='cuda:6'), covar=tensor([0.0789, 0.0893, 0.0820, 0.0951, 0.1099, 0.1533, 0.0669, 0.5114], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0242, 0.0276, 0.0290, 0.0331, 0.0280, 0.0300, 0.0294], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:18:36,746 INFO [finetune.py:976] (6/7) Epoch 18, batch 4600, loss[loss=0.1903, simple_loss=0.254, pruned_loss=0.0633, over 4841.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2521, pruned_loss=0.05593, over 952052.26 frames. ], batch size: 44, lr: 3.32e-03, grad_scale: 16.0 2023-03-26 22:18:42,886 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=101981.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:18:47,484 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=101987.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:19:05,873 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1114, 1.8776, 1.7920, 2.0641, 2.5625, 2.0948, 1.9478, 1.6256], device='cuda:6'), covar=tensor([0.2149, 0.2148, 0.1963, 0.1705, 0.1804, 0.1266, 0.2290, 0.1930], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0209, 0.0213, 0.0192, 0.0241, 0.0186, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:19:08,288 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8243, 1.7962, 1.6871, 1.7141, 1.3238, 4.1754, 1.6285, 2.0878], device='cuda:6'), covar=tensor([0.3057, 0.2320, 0.1971, 0.2199, 0.1647, 0.0124, 0.2343, 0.1104], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0114, 0.0096, 0.0096, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 22:19:11,218 INFO [finetune.py:976] (6/7) Epoch 18, batch 4650, loss[loss=0.153, simple_loss=0.2364, pruned_loss=0.03474, over 4822.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2493, pruned_loss=0.0551, over 952746.22 frames. 
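
The zipformer.py:2441 attn_weights_entropy tensors each print eight values, consistent with one summary statistic per attention head. A hedged sketch of such a diagnostic, computing the mean entropy of each head's attention distribution (low values indicate peaky, concentrated attention); the shapes and the exact reduction are assumptions, not a quote of the zipformer code:

import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    # attn: (num_heads, query_len, key_len), each row a distribution.
    ent = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (num_heads, query_len)
    return ent.mean(dim=-1)  # one scalar per head, as in the tensors above

attn = torch.softmax(torch.randn(8, 16, 16), dim=-1)
print(attn_weights_entropy(attn))
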
], batch size: 38, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:19:13,135 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5641, 1.5194, 2.0884, 3.3189, 2.2722, 2.4085, 1.2233, 2.7583], device='cuda:6'), covar=tensor([0.1762, 0.1445, 0.1333, 0.0585, 0.0786, 0.1267, 0.1713, 0.0472], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0164, 0.0100, 0.0135, 0.0124, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 22:19:23,891 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102040.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:19:29,247 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102048.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:19:31,549 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.060e+02 1.564e+02 1.867e+02 2.217e+02 4.281e+02, threshold=3.734e+02, percent-clipped=4.0 2023-03-26 22:19:45,056 INFO [finetune.py:976] (6/7) Epoch 18, batch 4700, loss[loss=0.1366, simple_loss=0.2046, pruned_loss=0.03428, over 4764.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2469, pruned_loss=0.05448, over 953484.63 frames. ], batch size: 28, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:20:12,729 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-26 22:20:13,862 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8880, 1.7174, 1.5381, 1.2758, 1.6334, 1.6468, 1.6621, 2.2285], device='cuda:6'), covar=tensor([0.3840, 0.3590, 0.3061, 0.3351, 0.3630, 0.2270, 0.3221, 0.1722], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0260, 0.0227, 0.0272, 0.0249, 0.0218, 0.0249, 0.0230], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:20:30,456 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 22:20:31,469 INFO [finetune.py:976] (6/7) Epoch 18, batch 4750, loss[loss=0.1918, simple_loss=0.2588, pruned_loss=0.0624, over 4918.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2442, pruned_loss=0.05322, over 955998.75 frames. ], batch size: 38, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:20:46,691 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1620, 2.1455, 1.8884, 2.2611, 2.7271, 2.2576, 2.0443, 1.6142], device='cuda:6'), covar=tensor([0.1985, 0.1696, 0.1683, 0.1437, 0.1574, 0.1091, 0.2057, 0.1864], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0209, 0.0213, 0.0193, 0.0241, 0.0187, 0.0215, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:20:56,163 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.538e+02 1.904e+02 2.326e+02 4.380e+02, threshold=3.807e+02, percent-clipped=2.0 2023-03-26 22:21:23,316 INFO [finetune.py:976] (6/7) Epoch 18, batch 4800, loss[loss=0.2045, simple_loss=0.2792, pruned_loss=0.06491, over 4900.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2466, pruned_loss=0.05429, over 955166.83 frames. ], batch size: 35, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:22:00,409 INFO [finetune.py:976] (6/7) Epoch 18, batch 4850, loss[loss=0.1659, simple_loss=0.2448, pruned_loss=0.04355, over 4930.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2498, pruned_loss=0.05456, over 955127.10 frames. 
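
The zipformer.py:1188 lines track a per-stack layer-dropout schedule: each encoder stack has a warmup window (warmup_begin, warmup_end), and on each step a possibly-empty set of layers is chosen to drop (num_to_drop is 0 or 1 throughout this section). The sketch below only reproduces the shape of these fields with a flat drop probability; how the real probability depends on batch_count relative to the warmup window cannot be read off the log, so the schedule here is a stand-in:

import random

def pick_layers_to_drop(batch_count: float, warmup_begin: float,
                        warmup_end: float, num_layers: int,
                        p: float = 0.075) -> set:
    # Stand-in policy: drop each layer independently with probability p.
    # The real zipformer.py presumably varies p with batch_count relative
    # to [warmup_begin, warmup_end]; that dependence is assumed away here.
    return {i for i in range(num_layers) if random.random() < p}

random.seed(0)
drop = pick_layers_to_drop(batch_count=102040.0, warmup_begin=1333.3,
                           warmup_end=2000.0, num_layers=4)
print(f"num_to_drop={len(drop)}, layers_to_drop={drop}")
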
], batch size: 33, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:22:20,216 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.897e+01 1.478e+02 1.803e+02 2.234e+02 3.533e+02, threshold=3.606e+02, percent-clipped=0.0 2023-03-26 22:22:23,462 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 22:22:26,351 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 22:22:29,405 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-26 22:22:33,609 INFO [finetune.py:976] (6/7) Epoch 18, batch 4900, loss[loss=0.1876, simple_loss=0.2529, pruned_loss=0.06117, over 4700.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2523, pruned_loss=0.05586, over 954173.09 frames. ], batch size: 23, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:22:37,664 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102276.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:22:39,923 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.73 vs. limit=5.0 2023-03-26 22:22:58,819 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0290, 1.6919, 2.4556, 4.0568, 2.7986, 2.7201, 0.8824, 3.5170], device='cuda:6'), covar=tensor([0.1671, 0.1534, 0.1345, 0.0551, 0.0726, 0.1467, 0.1997, 0.0370], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0164, 0.0100, 0.0135, 0.0124, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 22:23:06,927 INFO [finetune.py:976] (6/7) Epoch 18, batch 4950, loss[loss=0.1423, simple_loss=0.2136, pruned_loss=0.03551, over 4698.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2535, pruned_loss=0.05634, over 952670.66 frames. ], batch size: 23, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:23:14,083 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4688, 1.3676, 1.3865, 1.3629, 0.8386, 2.2808, 0.7719, 1.2591], device='cuda:6'), covar=tensor([0.3194, 0.2371, 0.2075, 0.2290, 0.1877, 0.0367, 0.2661, 0.1200], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0114, 0.0096, 0.0096, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 22:23:19,539 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102340.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:23:21,352 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=102343.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:23:25,693 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.64 vs. limit=5.0 2023-03-26 22:23:26,706 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.970e+01 1.449e+02 1.847e+02 2.188e+02 4.191e+02, threshold=3.694e+02, percent-clipped=1.0 2023-03-26 22:23:40,085 INFO [finetune.py:976] (6/7) Epoch 18, batch 5000, loss[loss=0.1953, simple_loss=0.2488, pruned_loss=0.07087, over 4730.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2516, pruned_loss=0.05599, over 952157.63 frames. 
], batch size: 23, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:23:51,810 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=102388.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:24:13,296 INFO [finetune.py:976] (6/7) Epoch 18, batch 5050, loss[loss=0.1581, simple_loss=0.2281, pruned_loss=0.04407, over 4913.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2486, pruned_loss=0.05477, over 952950.72 frames. ], batch size: 36, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:24:21,521 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9288, 3.4265, 3.6549, 3.5327, 3.4972, 3.2883, 3.9855, 1.3716], device='cuda:6'), covar=tensor([0.1423, 0.1730, 0.1487, 0.2298, 0.2072, 0.2470, 0.1461, 0.7529], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0243, 0.0278, 0.0290, 0.0332, 0.0282, 0.0302, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:24:33,972 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.223e+01 1.582e+02 1.966e+02 2.354e+02 3.513e+02, threshold=3.932e+02, percent-clipped=0.0 2023-03-26 22:24:46,890 INFO [finetune.py:976] (6/7) Epoch 18, batch 5100, loss[loss=0.1188, simple_loss=0.1873, pruned_loss=0.02513, over 4703.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2456, pruned_loss=0.05368, over 954894.37 frames. ], batch size: 23, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:24:56,974 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1093, 4.5793, 4.4354, 2.5772, 4.6853, 3.7128, 1.0356, 3.3064], device='cuda:6'), covar=tensor([0.2396, 0.2549, 0.1441, 0.3292, 0.0923, 0.0866, 0.5011, 0.1554], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0180, 0.0163, 0.0131, 0.0164, 0.0126, 0.0151, 0.0126], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 22:25:06,489 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1888, 2.1040, 2.1844, 2.1242, 1.7139, 3.9787, 2.0273, 2.5643], device='cuda:6'), covar=tensor([0.2837, 0.2100, 0.1729, 0.2021, 0.1444, 0.0185, 0.2031, 0.0985], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0114, 0.0119, 0.0123, 0.0113, 0.0095, 0.0096, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 22:25:20,567 INFO [finetune.py:976] (6/7) Epoch 18, batch 5150, loss[loss=0.242, simple_loss=0.3019, pruned_loss=0.09107, over 4061.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2481, pruned_loss=0.05591, over 953972.39 frames. ], batch size: 65, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:25:53,892 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.631e+02 1.989e+02 2.441e+02 4.766e+02, threshold=3.977e+02, percent-clipped=3.0 2023-03-26 22:26:14,504 INFO [finetune.py:976] (6/7) Epoch 18, batch 5200, loss[loss=0.1468, simple_loss=0.217, pruned_loss=0.03825, over 4250.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2512, pruned_loss=0.05674, over 951770.54 frames. ], batch size: 18, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:26:22,153 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102576.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:26:53,476 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.02 vs. 
limit=5.0 2023-03-26 22:26:56,624 INFO [finetune.py:976] (6/7) Epoch 18, batch 5250, loss[loss=0.1819, simple_loss=0.27, pruned_loss=0.04695, over 4818.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2518, pruned_loss=0.05633, over 948486.06 frames. ], batch size: 51, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:26:56,755 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3167, 2.2537, 2.8182, 1.5722, 2.5101, 2.6646, 2.1336, 2.8399], device='cuda:6'), covar=tensor([0.1485, 0.1889, 0.1526, 0.2408, 0.1014, 0.1794, 0.2642, 0.1012], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0204, 0.0190, 0.0190, 0.0176, 0.0214, 0.0218, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:26:58,553 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=102624.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:27:11,989 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=102643.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:27:17,760 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.182e+02 1.670e+02 1.981e+02 2.217e+02 4.217e+02, threshold=3.962e+02, percent-clipped=1.0 2023-03-26 22:27:29,390 INFO [finetune.py:976] (6/7) Epoch 18, batch 5300, loss[loss=0.2134, simple_loss=0.2799, pruned_loss=0.07341, over 4915.00 frames. ], tot_loss[loss=0.1836, simple_loss=0.2539, pruned_loss=0.05668, over 949209.78 frames. ], batch size: 38, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:27:44,024 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=102691.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:28:03,084 INFO [finetune.py:976] (6/7) Epoch 18, batch 5350, loss[loss=0.1857, simple_loss=0.2593, pruned_loss=0.05603, over 4889.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2524, pruned_loss=0.05565, over 950999.80 frames. ], batch size: 43, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:28:13,429 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-26 22:28:25,303 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.195e+02 1.536e+02 1.872e+02 2.228e+02 4.473e+02, threshold=3.745e+02, percent-clipped=1.0 2023-03-26 22:28:27,946 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.12 vs. limit=2.0 2023-03-26 22:28:36,909 INFO [finetune.py:976] (6/7) Epoch 18, batch 5400, loss[loss=0.1776, simple_loss=0.2429, pruned_loss=0.05612, over 4840.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2495, pruned_loss=0.05459, over 951297.56 frames. ], batch size: 47, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:28:44,202 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0837, 1.9465, 1.6871, 1.9537, 1.8540, 1.8366, 1.8509, 2.6406], device='cuda:6'), covar=tensor([0.3899, 0.4445, 0.3262, 0.4276, 0.4377, 0.2595, 0.4068, 0.1670], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0261, 0.0229, 0.0276, 0.0252, 0.0221, 0.0252, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:28:49,601 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.05 vs. limit=5.0 2023-03-26 22:29:10,819 INFO [finetune.py:976] (6/7) Epoch 18, batch 5450, loss[loss=0.1823, simple_loss=0.247, pruned_loss=0.05887, over 4929.00 frames. 
], tot_loss[loss=0.1767, simple_loss=0.2463, pruned_loss=0.05355, over 951251.07 frames. ], batch size: 33, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:29:17,887 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.29 vs. limit=5.0 2023-03-26 22:29:29,172 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5422, 1.4988, 1.4841, 1.5601, 1.1023, 3.2917, 1.2490, 1.6691], device='cuda:6'), covar=tensor([0.3346, 0.2448, 0.2186, 0.2378, 0.1818, 0.0215, 0.2649, 0.1284], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0096, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 22:29:30,990 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=102851.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:29:31,449 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.551e+01 1.511e+02 1.793e+02 2.102e+02 5.113e+02, threshold=3.586e+02, percent-clipped=1.0 2023-03-26 22:29:34,130 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 22:29:44,411 INFO [finetune.py:976] (6/7) Epoch 18, batch 5500, loss[loss=0.1717, simple_loss=0.2448, pruned_loss=0.04927, over 4910.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.244, pruned_loss=0.05281, over 952751.05 frames. ], batch size: 43, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:29:54,091 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=102887.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:29:55,123 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-26 22:30:12,701 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102912.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:30:18,101 INFO [finetune.py:976] (6/7) Epoch 18, batch 5550, loss[loss=0.2124, simple_loss=0.2918, pruned_loss=0.0665, over 4844.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2452, pruned_loss=0.05383, over 950784.08 frames. ], batch size: 44, lr: 3.32e-03, grad_scale: 32.0 2023-03-26 22:30:36,219 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=102948.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:30:39,503 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.559e+02 1.902e+02 2.213e+02 4.520e+02, threshold=3.805e+02, percent-clipped=3.0 2023-03-26 22:30:50,053 INFO [finetune.py:976] (6/7) Epoch 18, batch 5600, loss[loss=0.1723, simple_loss=0.2452, pruned_loss=0.04965, over 4928.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2479, pruned_loss=0.05424, over 949537.08 frames. ], batch size: 38, lr: 3.32e-03, grad_scale: 16.0 2023-03-26 22:31:42,252 INFO [finetune.py:976] (6/7) Epoch 18, batch 5650, loss[loss=0.1714, simple_loss=0.2494, pruned_loss=0.0467, over 4804.00 frames. ], tot_loss[loss=0.1816, simple_loss=0.2519, pruned_loss=0.05559, over 952440.07 frames. 
], batch size: 51, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:31:44,119 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8529, 4.0671, 3.7572, 2.1594, 4.1138, 3.2270, 0.8833, 2.7845], device='cuda:6'), covar=tensor([0.2192, 0.2007, 0.1410, 0.3175, 0.1008, 0.0898, 0.4454, 0.1603], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0175, 0.0158, 0.0128, 0.0159, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 22:31:45,373 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7294, 3.6957, 3.5089, 1.9680, 3.7962, 2.9256, 0.8107, 2.5569], device='cuda:6'), covar=tensor([0.2791, 0.1992, 0.1423, 0.3188, 0.0925, 0.0988, 0.4396, 0.1609], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0175, 0.0159, 0.0128, 0.0159, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 22:32:09,153 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.800e+01 1.506e+02 1.835e+02 2.191e+02 3.638e+02, threshold=3.670e+02, percent-clipped=0.0 2023-03-26 22:32:19,853 INFO [finetune.py:976] (6/7) Epoch 18, batch 5700, loss[loss=0.1334, simple_loss=0.1982, pruned_loss=0.03434, over 4194.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2475, pruned_loss=0.0547, over 932923.23 frames. ], batch size: 18, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:32:48,063 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103098.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:32:48,575 INFO [finetune.py:976] (6/7) Epoch 19, batch 0, loss[loss=0.2126, simple_loss=0.2924, pruned_loss=0.06641, over 4906.00 frames. ], tot_loss[loss=0.2126, simple_loss=0.2924, pruned_loss=0.06641, over 4906.00 frames. ], batch size: 46, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:32:48,575 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 22:33:03,099 INFO [finetune.py:1010] (6/7) Epoch 19, validation: loss=0.1586, simple_loss=0.2282, pruned_loss=0.04454, over 2265189.00 frames. 2023-03-26 22:33:03,100 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 22:33:25,834 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103132.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:33:33,033 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 22:33:38,061 INFO [finetune.py:976] (6/7) Epoch 19, batch 50, loss[loss=0.1862, simple_loss=0.2466, pruned_loss=0.06285, over 4868.00 frames. ], tot_loss[loss=0.1849, simple_loss=0.2558, pruned_loss=0.05697, over 217531.75 frames. ], batch size: 34, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:33:40,498 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.779e+01 1.456e+02 1.782e+02 2.150e+02 3.860e+02, threshold=3.565e+02, percent-clipped=1.0 2023-03-26 22:33:42,745 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 22:33:44,745 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103159.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:33:58,480 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. 
limit=2.0 2023-03-26 22:34:07,711 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103193.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:34:11,664 INFO [finetune.py:976] (6/7) Epoch 19, batch 100, loss[loss=0.1828, simple_loss=0.2527, pruned_loss=0.05648, over 4871.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2487, pruned_loss=0.05539, over 382863.08 frames. ], batch size: 34, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:34:17,534 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103207.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:34:24,157 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.54 vs. limit=5.0 2023-03-26 22:34:29,379 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.73 vs. limit=2.0 2023-03-26 22:34:33,052 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9461, 1.3434, 1.9602, 1.8880, 1.7006, 1.6665, 1.8135, 1.7923], device='cuda:6'), covar=tensor([0.3534, 0.3891, 0.3099, 0.3391, 0.4513, 0.3612, 0.4025, 0.2952], device='cuda:6'), in_proj_covar=tensor([0.0250, 0.0241, 0.0260, 0.0276, 0.0275, 0.0249, 0.0284, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:34:41,202 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103243.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:34:45,334 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7640, 3.3678, 3.1965, 1.8467, 3.4391, 2.6131, 0.8268, 2.3568], device='cuda:6'), covar=tensor([0.2219, 0.2151, 0.1774, 0.3319, 0.1268, 0.1117, 0.4811, 0.1795], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0175, 0.0159, 0.0128, 0.0159, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 22:34:45,882 INFO [finetune.py:976] (6/7) Epoch 19, batch 150, loss[loss=0.1669, simple_loss=0.2438, pruned_loss=0.04496, over 4922.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2421, pruned_loss=0.05227, over 510990.68 frames. ], batch size: 37, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:34:48,707 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.463e+02 1.787e+02 2.269e+02 3.542e+02, threshold=3.573e+02, percent-clipped=0.0 2023-03-26 22:35:03,448 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9105, 1.3934, 1.9392, 1.8799, 1.6250, 1.5743, 1.8363, 1.7737], device='cuda:6'), covar=tensor([0.3293, 0.3572, 0.2797, 0.3224, 0.4203, 0.3384, 0.3643, 0.2616], device='cuda:6'), in_proj_covar=tensor([0.0250, 0.0241, 0.0260, 0.0276, 0.0274, 0.0249, 0.0284, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:35:19,727 INFO [finetune.py:976] (6/7) Epoch 19, batch 200, loss[loss=0.1836, simple_loss=0.2437, pruned_loss=0.06176, over 4824.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2434, pruned_loss=0.05338, over 609527.42 frames. 
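
At the start of epoch 19 the frame count inside tot_loss[...] grows batch by batch (4906.00 at batch 0, then 217531.75, 382863.08, 510990.68, and 609527.42 at batches 50 through 200), so tot_loss behaves like a frame-weighted running average over the epoch so far. A minimal sketch of such a tracker; the real accumulator in finetune.py may additionally down-weight older batches:

class RunningLoss:
    """Frame-weighted running average like the tot_loss[...] field."""
    def __init__(self):
        self.weighted_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float) -> None:
        self.weighted_sum += batch_loss * batch_frames
        self.frames += batch_frames

    @property
    def value(self) -> float:
        return self.weighted_sum / max(self.frames, 1.0)

tracker = RunningLoss()
tracker.update(0.2126, 4906.0)  # Epoch 19, batch 0 (logged above)
tracker.update(0.19, 4500.0)    # later batch: illustrative values only
print(f"tot_loss[loss={tracker.value:.4f}, over {tracker.frames:.2f} frames.]")

This also explains why tot_loss is much smoother than the per-batch loss: each new batch contributes only a few thousand frames against the hundreds of thousands already accumulated.
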
], batch size: 30, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:35:34,079 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2149, 4.8342, 4.5430, 3.1030, 4.8964, 3.7094, 0.9140, 3.3997], device='cuda:6'), covar=tensor([0.2059, 0.1872, 0.1252, 0.2639, 0.0695, 0.0863, 0.4722, 0.1421], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0175, 0.0159, 0.0128, 0.0159, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 22:35:53,176 INFO [finetune.py:976] (6/7) Epoch 19, batch 250, loss[loss=0.2012, simple_loss=0.2799, pruned_loss=0.06126, over 4813.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2461, pruned_loss=0.05404, over 688155.01 frames. ], batch size: 45, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:35:56,517 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.389e+01 1.572e+02 1.886e+02 2.263e+02 4.128e+02, threshold=3.772e+02, percent-clipped=1.0 2023-03-26 22:36:05,017 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103366.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:36:09,994 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.18 vs. limit=5.0 2023-03-26 22:36:25,724 INFO [finetune.py:976] (6/7) Epoch 19, batch 300, loss[loss=0.1919, simple_loss=0.2697, pruned_loss=0.05705, over 4917.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2502, pruned_loss=0.05504, over 747333.77 frames. ], batch size: 42, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:36:39,208 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-26 22:37:01,258 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-26 22:37:02,194 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103427.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 22:37:12,459 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-26 22:37:21,695 INFO [finetune.py:976] (6/7) Epoch 19, batch 350, loss[loss=0.1851, simple_loss=0.2537, pruned_loss=0.05824, over 4842.00 frames. ], tot_loss[loss=0.1834, simple_loss=0.2533, pruned_loss=0.05672, over 792995.17 frames. ], batch size: 49, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:37:28,081 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.554e+02 1.898e+02 2.403e+02 5.343e+02, threshold=3.796e+02, percent-clipped=4.0 2023-03-26 22:37:29,795 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103454.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:03,044 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103488.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:10,179 INFO [finetune.py:976] (6/7) Epoch 19, batch 400, loss[loss=0.1577, simple_loss=0.2303, pruned_loss=0.04251, over 4751.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2531, pruned_loss=0.05605, over 830079.39 frames. ], batch size: 28, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:38:13,998 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. 
limit=2.0 2023-03-26 22:38:16,110 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103507.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:22,604 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7943, 1.6725, 2.0817, 1.3360, 1.9673, 2.0764, 1.5581, 2.1862], device='cuda:6'), covar=tensor([0.1129, 0.1804, 0.1264, 0.1781, 0.0810, 0.1142, 0.2526, 0.0745], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0202, 0.0189, 0.0188, 0.0173, 0.0212, 0.0216, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:38:24,879 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7079, 1.5354, 2.1460, 3.3645, 2.2601, 2.4472, 1.0510, 2.8313], device='cuda:6'), covar=tensor([0.1520, 0.1416, 0.1249, 0.0536, 0.0753, 0.1248, 0.1715, 0.0419], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0115, 0.0132, 0.0163, 0.0099, 0.0134, 0.0123, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-26 22:38:39,517 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103543.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:43,097 INFO [finetune.py:976] (6/7) Epoch 19, batch 450, loss[loss=0.2434, simple_loss=0.2952, pruned_loss=0.09578, over 4172.00 frames. ], tot_loss[loss=0.1815, simple_loss=0.252, pruned_loss=0.0555, over 858483.08 frames. ], batch size: 66, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:38:45,991 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.312e+01 1.502e+02 1.696e+02 2.061e+02 2.854e+02, threshold=3.392e+02, percent-clipped=0.0 2023-03-26 22:38:47,239 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103555.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:38:51,356 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-26 22:39:04,052 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-26 22:39:19,609 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103591.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:39:24,373 INFO [finetune.py:976] (6/7) Epoch 19, batch 500, loss[loss=0.1935, simple_loss=0.2487, pruned_loss=0.06917, over 4762.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2496, pruned_loss=0.05539, over 877194.89 frames. ], batch size: 28, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:39:57,690 INFO [finetune.py:976] (6/7) Epoch 19, batch 550, loss[loss=0.2023, simple_loss=0.2591, pruned_loss=0.07278, over 4726.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2464, pruned_loss=0.054, over 896087.27 frames. 
], batch size: 54, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:40:00,581 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.543e+02 1.834e+02 2.179e+02 4.966e+02, threshold=3.668e+02, percent-clipped=2.0 2023-03-26 22:40:03,717 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0835, 1.4720, 0.6800, 2.0614, 2.5005, 1.7773, 1.9707, 2.0031], device='cuda:6'), covar=tensor([0.1472, 0.2048, 0.2213, 0.1081, 0.1793, 0.1878, 0.1283, 0.1898], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0110, 0.0091, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 22:40:31,340 INFO [finetune.py:976] (6/7) Epoch 19, batch 600, loss[loss=0.212, simple_loss=0.2884, pruned_loss=0.06775, over 4820.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2481, pruned_loss=0.05463, over 909900.20 frames. ], batch size: 38, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:40:36,194 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.10 vs. limit=2.0 2023-03-26 22:40:47,254 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=103722.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 22:40:55,069 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0321, 1.7121, 2.1259, 2.0224, 1.8036, 1.8086, 1.9557, 1.9262], device='cuda:6'), covar=tensor([0.4241, 0.4162, 0.3192, 0.4148, 0.5369, 0.4075, 0.5047, 0.3131], device='cuda:6'), in_proj_covar=tensor([0.0250, 0.0241, 0.0260, 0.0276, 0.0275, 0.0249, 0.0284, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:40:58,707 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103740.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:04,044 INFO [finetune.py:976] (6/7) Epoch 19, batch 650, loss[loss=0.1369, simple_loss=0.2062, pruned_loss=0.03377, over 4809.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2501, pruned_loss=0.05484, over 920950.40 frames. ], batch size: 25, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:41:06,468 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.539e+02 1.795e+02 2.169e+02 3.837e+02, threshold=3.591e+02, percent-clipped=1.0 2023-03-26 22:41:07,218 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103754.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:15,385 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103765.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:30,849 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=103788.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:37,406 INFO [finetune.py:976] (6/7) Epoch 19, batch 700, loss[loss=0.1678, simple_loss=0.2465, pruned_loss=0.04459, over 4210.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2511, pruned_loss=0.05477, over 926385.01 frames. 
], batch size: 65, lr: 3.31e-03, grad_scale: 16.0 2023-03-26 22:41:38,278 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9435, 1.9269, 1.9598, 1.2044, 2.0000, 1.9141, 1.9836, 1.6777], device='cuda:6'), covar=tensor([0.0536, 0.0582, 0.0598, 0.0876, 0.0876, 0.0597, 0.0514, 0.1090], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0134, 0.0140, 0.0120, 0.0124, 0.0137, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:41:38,370 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 22:41:38,885 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103801.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:41:39,436 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103802.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:42:02,010 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103826.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:42:13,878 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=103836.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:42:26,321 INFO [finetune.py:976] (6/7) Epoch 19, batch 750, loss[loss=0.1905, simple_loss=0.2563, pruned_loss=0.06234, over 4234.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2522, pruned_loss=0.0553, over 933011.16 frames. ], batch size: 65, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:42:33,271 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.573e+02 1.871e+02 2.192e+02 5.260e+02, threshold=3.742e+02, percent-clipped=2.0 2023-03-26 22:43:03,185 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103876.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 22:43:22,082 INFO [finetune.py:976] (6/7) Epoch 19, batch 800, loss[loss=0.1775, simple_loss=0.2496, pruned_loss=0.05271, over 4806.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2522, pruned_loss=0.05524, over 939274.31 frames. ], batch size: 41, lr: 3.30e-03, grad_scale: 16.0 2023-03-26 22:43:30,680 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4660, 1.4727, 1.2146, 1.5229, 1.8377, 1.6460, 1.4333, 1.3524], device='cuda:6'), covar=tensor([0.0326, 0.0342, 0.0648, 0.0306, 0.0230, 0.0573, 0.0343, 0.0424], device='cuda:6'), in_proj_covar=tensor([0.0095, 0.0107, 0.0141, 0.0109, 0.0098, 0.0108, 0.0098, 0.0109], device='cuda:6'), out_proj_covar=tensor([7.3734e-05, 8.2358e-05, 1.1114e-04, 8.3942e-05, 7.6729e-05, 7.9396e-05, 7.3121e-05, 8.3490e-05], device='cuda:6') 2023-03-26 22:43:34,180 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9076, 1.8342, 1.6477, 2.1042, 2.5392, 2.1012, 1.7758, 1.5455], device='cuda:6'), covar=tensor([0.2269, 0.1982, 0.1970, 0.1546, 0.1664, 0.1131, 0.2231, 0.1997], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0192, 0.0241, 0.0186, 0.0214, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 22:43:48,641 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=103937.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 22:43:55,834 INFO [finetune.py:976] (6/7) Epoch 19, batch 850, loss[loss=0.1831, simple_loss=0.2502, pruned_loss=0.05799, over 4899.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2493, pruned_loss=0.0543, over 942432.30 frames. 
2023-03-26 22:43:58,239 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.499e+02 1.798e+02 2.103e+02 3.961e+02, threshold=3.597e+02, percent-clipped=1.0
2023-03-26 22:44:25,966 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103990.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:44:30,242 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=103997.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:44:31,320 INFO [finetune.py:976] (6/7) Epoch 19, batch 900, loss[loss=0.184, simple_loss=0.2409, pruned_loss=0.06352, over 4799.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.247, pruned_loss=0.05397, over 947146.17 frames. ], batch size: 51, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:44:46,733 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104022.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 22:44:53,448 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0
2023-03-26 22:45:05,990 INFO [finetune.py:976] (6/7) Epoch 19, batch 950, loss[loss=0.2032, simple_loss=0.2818, pruned_loss=0.06235, over 4935.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2448, pruned_loss=0.05353, over 948680.04 frames. ], batch size: 33, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:45:06,110 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=104049.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:45:07,329 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104051.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:45:08,390 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.542e+01 1.532e+02 1.748e+02 2.079e+02 4.067e+02, threshold=3.497e+02, percent-clipped=1.0
2023-03-26 22:45:11,557 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104058.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:45:18,744 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104070.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 22:45:36,944 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104096.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:45:38,695 INFO [finetune.py:976] (6/7) Epoch 19, batch 1000, loss[loss=0.2004, simple_loss=0.2641, pruned_loss=0.06833, over 4819.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.247, pruned_loss=0.05459, over 949901.55 frames. ], batch size: 39, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:45:45,438 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=104110.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:45:52,087 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104121.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:46:12,345 INFO [finetune.py:976] (6/7) Epoch 19, batch 1050, loss[loss=0.156, simple_loss=0.2117, pruned_loss=0.05017, over 4326.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2491, pruned_loss=0.05445, over 951895.39 frames. ], batch size: 19, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:46:14,763 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.591e+02 1.940e+02 2.273e+02 3.456e+02, threshold=3.881e+02, percent-clipped=0.0
2023-03-26 22:46:44,172 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 22:46:48,742 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9070, 4.3832, 4.2020, 2.3900, 4.5011, 3.5088, 0.8552, 3.1441], device='cuda:6'), covar=tensor([0.2370, 0.1850, 0.1372, 0.2885, 0.0904, 0.0804, 0.4446, 0.1407], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0178, 0.0162, 0.0129, 0.0161, 0.0124, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 22:46:53,935 INFO [finetune.py:976] (6/7) Epoch 19, batch 1100, loss[loss=0.2381, simple_loss=0.298, pruned_loss=0.08911, over 4227.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2508, pruned_loss=0.0549, over 952641.41 frames. ], batch size: 65, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:47:07,425 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0062, 2.6925, 2.4674, 1.2950, 2.6144, 2.0658, 2.0274, 2.2839], device='cuda:6'), covar=tensor([0.1070, 0.0833, 0.1877, 0.2101, 0.1879, 0.2194, 0.2015, 0.1228], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0193, 0.0200, 0.0183, 0.0211, 0.0207, 0.0222, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:47:16,922 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104232.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 22:47:29,724 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0
2023-03-26 22:47:39,153 INFO [finetune.py:976] (6/7) Epoch 19, batch 1150, loss[loss=0.1916, simple_loss=0.2646, pruned_loss=0.05929, over 4823.00 frames. ], tot_loss[loss=0.1813, simple_loss=0.2515, pruned_loss=0.05557, over 953493.82 frames. ], batch size: 30, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:47:47,027 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.710e+02 1.992e+02 2.366e+02 4.129e+02, threshold=3.984e+02, percent-clipped=1.0
2023-03-26 22:48:25,526 INFO [finetune.py:976] (6/7) Epoch 19, batch 1200, loss[loss=0.1692, simple_loss=0.239, pruned_loss=0.04965, over 4923.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2495, pruned_loss=0.05492, over 954614.28 frames. ], batch size: 33, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:48:33,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7516, 1.7370, 1.4857, 1.8089, 2.3137, 1.9605, 1.6929, 1.3918], device='cuda:6'), covar=tensor([0.2281, 0.2098, 0.2125, 0.1746, 0.1790, 0.1222, 0.2392, 0.2021], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0210, 0.0215, 0.0194, 0.0243, 0.0188, 0.0216, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:48:50,984 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2950, 3.7622, 3.8899, 4.1075, 4.0597, 3.8103, 4.3698, 1.3665], device='cuda:6'), covar=tensor([0.0764, 0.0775, 0.0857, 0.0989, 0.1116, 0.1381, 0.0620, 0.5459], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0240, 0.0277, 0.0288, 0.0328, 0.0279, 0.0299, 0.0292], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:49:05,635 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104346.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:49:07,373 INFO [finetune.py:976] (6/7) Epoch 19, batch 1250, loss[loss=0.1547, simple_loss=0.2277, pruned_loss=0.0409, over 4827.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2477, pruned_loss=0.05446, over 956016.55 frames. ], batch size: 33, lr: 3.30e-03, grad_scale: 16.0
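The attn_weights_entropy dumps print one value per attention head (eight here). A per-head entropy of the softmax attention weights can be computed as below; which axes are averaged is an assumption on my part, but the shape matches the eight-element tensors in these logs. A uniform distribution over L keys would give entropy log(L), so small values flag heads that attend very sharply.

    import torch

    def attn_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        """attn_weights: (num_heads, query_len, key_len); each row sums to 1."""
        eps = 1.0e-20
        ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)  # (heads, queries)
        return ent.mean(dim=-1)  # one entropy value per head, as printed in the log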
2023-03-26 22:49:10,322 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.979e+01 1.472e+02 1.754e+02 2.218e+02 4.171e+02, threshold=3.509e+02, percent-clipped=1.0
2023-03-26 22:49:10,410 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104353.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:49:23,367 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
2023-03-26 22:49:39,164 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104396.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:49:41,420 INFO [finetune.py:976] (6/7) Epoch 19, batch 1300, loss[loss=0.2271, simple_loss=0.2845, pruned_loss=0.08487, over 4175.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2459, pruned_loss=0.05403, over 954754.86 frames. ], batch size: 18, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:49:45,744 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=104405.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:49:56,437 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104421.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:50:07,348 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4813, 2.2263, 2.8018, 1.8152, 2.5769, 2.7430, 2.0583, 2.9065], device='cuda:6'), covar=tensor([0.1480, 0.2062, 0.1590, 0.2243, 0.0973, 0.1660, 0.2824, 0.1074], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0203, 0.0190, 0.0188, 0.0174, 0.0212, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:50:10,350 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104444.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:50:14,761 INFO [finetune.py:976] (6/7) Epoch 19, batch 1350, loss[loss=0.2528, simple_loss=0.3098, pruned_loss=0.09794, over 4830.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2456, pruned_loss=0.05391, over 954567.57 frames. ], batch size: 38, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:50:17,642 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.617e+02 1.931e+02 2.310e+02 3.973e+02, threshold=3.863e+02, percent-clipped=4.0
2023-03-26 22:50:29,051 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104469.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:50:48,509 INFO [finetune.py:976] (6/7) Epoch 19, batch 1400, loss[loss=0.2185, simple_loss=0.2913, pruned_loss=0.0728, over 4818.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2486, pruned_loss=0.05465, over 954362.62 frames. ], batch size: 51, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:51:10,562 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104532.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 22:51:21,265 INFO [finetune.py:976] (6/7) Epoch 19, batch 1450, loss[loss=0.2116, simple_loss=0.2875, pruned_loss=0.06791, over 4938.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2501, pruned_loss=0.05497, over 954297.67 frames. ], batch size: 33, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:51:24,645 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.650e+02 1.913e+02 2.290e+02 4.485e+02, threshold=3.826e+02, percent-clipped=3.0
2023-03-26 22:51:42,885 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104580.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:52:02,825 INFO [finetune.py:976] (6/7) Epoch 19, batch 1500, loss[loss=0.1801, simple_loss=0.2599, pruned_loss=0.05019, over 4839.00 frames. ], tot_loss[loss=0.182, simple_loss=0.252, pruned_loss=0.05601, over 953039.90 frames. ], batch size: 47, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:52:29,695 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8364, 1.6438, 1.4495, 1.2761, 1.6097, 1.5809, 1.6147, 2.1711], device='cuda:6'), covar=tensor([0.3400, 0.3309, 0.2872, 0.3322, 0.3376, 0.2180, 0.3221, 0.1637], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0261, 0.0230, 0.0275, 0.0251, 0.0221, 0.0252, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:52:33,844 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104646.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:52:35,533 INFO [finetune.py:976] (6/7) Epoch 19, batch 1550, loss[loss=0.174, simple_loss=0.2399, pruned_loss=0.05402, over 4798.00 frames. ], tot_loss[loss=0.1819, simple_loss=0.2519, pruned_loss=0.05598, over 952587.84 frames. ], batch size: 29, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:52:40,200 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.878e+01 1.494e+02 1.864e+02 2.283e+02 3.386e+02, threshold=3.728e+02, percent-clipped=0.0
2023-03-26 22:52:40,302 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104653.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:52:58,593 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6091, 2.4055, 2.9873, 1.8662, 2.6487, 3.0418, 2.1652, 3.1339], device='cuda:6'), covar=tensor([0.1364, 0.1847, 0.1727, 0.2176, 0.1011, 0.1438, 0.2602, 0.0882], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0202, 0.0189, 0.0186, 0.0173, 0.0211, 0.0215, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:53:00,997 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7187, 1.6442, 2.1879, 3.4895, 2.3398, 2.3877, 0.9426, 2.8397], device='cuda:6'), covar=tensor([0.1561, 0.1280, 0.1204, 0.0453, 0.0699, 0.1476, 0.1780, 0.0460], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0115, 0.0131, 0.0162, 0.0098, 0.0134, 0.0122, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 22:53:26,186 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104694.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:53:29,189 INFO [finetune.py:976] (6/7) Epoch 19, batch 1600, loss[loss=0.1965, simple_loss=0.2478, pruned_loss=0.07256, over 3947.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2495, pruned_loss=0.05496, over 951713.00 frames. ], batch size: 17, lr: 3.30e-03, grad_scale: 16.0
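In each [finetune.py:976] line, loss[...] is the current batch (a few thousand frames) while tot_loss[...] is reported over roughly 950k frames, which reads like a frame-weighted running average. A minimal sketch of that bookkeeping, with names of my own choosing rather than the actual finetune.py code:

    class RunningLoss:
        """Frame-weighted running average, as in 'tot_loss[... over N frames]'."""
        def __init__(self):
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> float:
            self.loss_sum += batch_loss * batch_frames  # weight each batch by its frames
            self.frames += batch_frames
            return self.loss_sum / self.frames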
2023-03-26 22:53:35,260 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104701.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:53:38,276 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=104705.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:53:39,476 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1471, 1.9186, 2.3965, 1.6064, 2.2939, 2.4855, 1.7888, 2.6125], device='cuda:6'), covar=tensor([0.1237, 0.1987, 0.1682, 0.2021, 0.0884, 0.1196, 0.2586, 0.0724], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0202, 0.0189, 0.0186, 0.0173, 0.0211, 0.0215, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:54:11,340 INFO [finetune.py:976] (6/7) Epoch 19, batch 1650, loss[loss=0.1572, simple_loss=0.2348, pruned_loss=0.03982, over 4902.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2472, pruned_loss=0.05444, over 952044.32 frames. ], batch size: 35, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:54:13,776 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.513e+02 1.754e+02 2.121e+02 3.523e+02, threshold=3.508e+02, percent-clipped=0.0
2023-03-26 22:54:13,851 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=104753.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 22:54:19,837 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9781, 1.7992, 1.4235, 1.3950, 2.2935, 2.4380, 1.9775, 1.8574], device='cuda:6'), covar=tensor([0.0370, 0.0454, 0.0864, 0.0529, 0.0330, 0.0521, 0.0572, 0.0481], device='cuda:6'), in_proj_covar=tensor([0.0096, 0.0108, 0.0142, 0.0111, 0.0100, 0.0109, 0.0099, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.4595e-05, 8.3291e-05, 1.1213e-04, 8.5178e-05, 7.7653e-05, 8.0687e-05, 7.4076e-05, 8.4558e-05], device='cuda:6')
2023-03-26 22:54:23,420 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5747, 1.4576, 1.9099, 1.8362, 1.6015, 3.5154, 1.3520, 1.5954], device='cuda:6'), covar=tensor([0.0938, 0.1876, 0.1080, 0.0957, 0.1574, 0.0204, 0.1471, 0.1731], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 22:54:41,154 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4150, 1.4160, 1.2556, 1.4427, 1.6884, 1.6147, 1.4195, 1.2342], device='cuda:6'), covar=tensor([0.0380, 0.0282, 0.0543, 0.0274, 0.0246, 0.0469, 0.0302, 0.0432], device='cuda:6'), in_proj_covar=tensor([0.0096, 0.0108, 0.0142, 0.0111, 0.0099, 0.0109, 0.0099, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.4375e-05, 8.3014e-05, 1.1170e-04, 8.5002e-05, 7.7463e-05, 8.0559e-05, 7.4011e-05, 8.4357e-05], device='cuda:6')
2023-03-26 22:54:44,699 INFO [finetune.py:976] (6/7) Epoch 19, batch 1700, loss[loss=0.1377, simple_loss=0.2102, pruned_loss=0.0326, over 4769.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2455, pruned_loss=0.05378, over 952004.04 frames. ], batch size: 28, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:54:56,267 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-26 22:55:00,598 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2791, 2.2702, 1.8985, 2.3741, 2.2419, 1.9926, 2.7778, 2.3728], device='cuda:6'), covar=tensor([0.1256, 0.2336, 0.2691, 0.2517, 0.2322, 0.1498, 0.3033, 0.1595], device='cuda:6'), in_proj_covar=tensor([0.0185, 0.0188, 0.0235, 0.0253, 0.0246, 0.0203, 0.0214, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:55:17,898 INFO [finetune.py:976] (6/7) Epoch 19, batch 1750, loss[loss=0.1397, simple_loss=0.2283, pruned_loss=0.02548, over 4763.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2475, pruned_loss=0.05453, over 951452.70 frames. ], batch size: 28, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:55:20,304 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.630e+02 1.888e+02 2.368e+02 5.925e+02, threshold=3.776e+02, percent-clipped=5.0
2023-03-26 22:55:26,831 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7358, 1.6697, 1.4519, 1.8102, 2.2642, 1.8632, 1.7564, 1.4466], device='cuda:6'), covar=tensor([0.2000, 0.1820, 0.1780, 0.1498, 0.1656, 0.1169, 0.2138, 0.1709], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0209, 0.0213, 0.0193, 0.0242, 0.0187, 0.0215, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:55:28,598 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-26 22:55:51,607 INFO [finetune.py:976] (6/7) Epoch 19, batch 1800, loss[loss=0.2516, simple_loss=0.3097, pruned_loss=0.09674, over 4840.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2506, pruned_loss=0.05523, over 951788.59 frames. ], batch size: 49, lr: 3.30e-03, grad_scale: 16.0
2023-03-26 22:55:59,616 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3833, 1.4112, 2.1300, 1.7897, 1.7700, 4.0035, 1.3731, 1.6822], device='cuda:6'), covar=tensor([0.1054, 0.1787, 0.1218, 0.1028, 0.1551, 0.0192, 0.1580, 0.1853], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 22:56:11,258 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0
2023-03-26 22:56:25,139 INFO [finetune.py:976] (6/7) Epoch 19, batch 1850, loss[loss=0.2115, simple_loss=0.2966, pruned_loss=0.06321, over 4907.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2521, pruned_loss=0.05565, over 951854.65 frames. ], batch size: 43, lr: 3.30e-03, grad_scale: 32.0
2023-03-26 22:56:27,537 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.864e+01 1.560e+02 1.785e+02 2.312e+02 4.235e+02, threshold=3.569e+02, percent-clipped=1.0
2023-03-26 22:57:00,524 INFO [finetune.py:976] (6/7) Epoch 19, batch 1900, loss[loss=0.1949, simple_loss=0.2667, pruned_loss=0.06159, over 4878.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2532, pruned_loss=0.05604, over 951992.06 frames. ], batch size: 43, lr: 3.30e-03, grad_scale: 32.0
2023-03-26 22:57:37,614 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1593, 1.9944, 2.1584, 1.4682, 2.0561, 2.1415, 2.2304, 1.6531], device='cuda:6'), covar=tensor([0.0483, 0.0600, 0.0568, 0.0820, 0.0647, 0.0595, 0.0502, 0.1153], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0132, 0.0137, 0.0118, 0.0122, 0.0135, 0.0137, 0.0158], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 22:57:42,255 INFO [finetune.py:976] (6/7) Epoch 19, batch 1950, loss[loss=0.1829, simple_loss=0.254, pruned_loss=0.05594, over 4912.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2515, pruned_loss=0.05527, over 953016.34 frames. ], batch size: 37, lr: 3.30e-03, grad_scale: 32.0
2023-03-26 22:57:44,664 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.372e+01 1.430e+02 1.759e+02 2.099e+02 5.293e+02, threshold=3.517e+02, percent-clipped=3.0
2023-03-26 22:58:31,189 INFO [finetune.py:976] (6/7) Epoch 19, batch 2000, loss[loss=0.1912, simple_loss=0.263, pruned_loss=0.05976, over 4863.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2495, pruned_loss=0.05486, over 953647.02 frames. ], batch size: 34, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 22:59:04,778 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5233, 1.5287, 2.0101, 1.7802, 1.6981, 3.5384, 1.4231, 1.6695], device='cuda:6'), covar=tensor([0.0942, 0.1726, 0.0962, 0.0923, 0.1438, 0.0190, 0.1382, 0.1620], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 22:59:17,046 INFO [finetune.py:976] (6/7) Epoch 19, batch 2050, loss[loss=0.1757, simple_loss=0.2397, pruned_loss=0.05591, over 4827.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2471, pruned_loss=0.0547, over 951681.98 frames. ], batch size: 40, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 22:59:19,897 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.557e+02 1.803e+02 2.317e+02 4.729e+02, threshold=3.605e+02, percent-clipped=3.0
2023-03-26 22:59:22,669 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0
2023-03-26 22:59:50,004 INFO [finetune.py:976] (6/7) Epoch 19, batch 2100, loss[loss=0.161, simple_loss=0.2382, pruned_loss=0.0419, over 4907.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2467, pruned_loss=0.05459, over 953506.59 frames. ], batch size: 36, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:00:01,479 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1522, 1.4444, 0.7401, 1.8830, 2.2901, 1.6222, 1.6487, 1.8300], device='cuda:6'), covar=tensor([0.1424, 0.2133, 0.2249, 0.1131, 0.2044, 0.1997, 0.1470, 0.2045], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0109, 0.0090, 0.0118, 0.0092, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 23:00:23,800 INFO [finetune.py:976] (6/7) Epoch 19, batch 2150, loss[loss=0.1415, simple_loss=0.1934, pruned_loss=0.0448, over 3996.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2478, pruned_loss=0.05476, over 952016.39 frames. ], batch size: 17, lr: 3.29e-03, grad_scale: 32.0
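The grad_scale field doubles from 16.0 to 32.0 around batch 1850 (and later to 64.0, falling back to 32.0 near batch 4050), which is the signature of dynamic loss scaling in fp16 training: the scale grows after a long stretch without overflow and is halved when an inf/nan gradient appears. Standard torch.cuda.amp usage reproduces this behaviour; the hyperparameter values below are illustrative, not taken from finetune.py.

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=16.0, growth_factor=2.0,
                                       backoff_factor=0.5, growth_interval=2000)

    def train_step(model, optimizer, batch, loss_fn):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = loss_fn(model, batch)
        scaler.scale(loss).backward()  # backward on the scaled loss
        scaler.step(optimizer)         # skipped automatically on overflow
        scaler.update()                # grows or backs off the scale
        return loss.detach()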
2023-03-26 23:00:26,649 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.573e+02 1.937e+02 2.406e+02 5.182e+02, threshold=3.875e+02, percent-clipped=4.0
2023-03-26 23:00:57,385 INFO [finetune.py:976] (6/7) Epoch 19, batch 2200, loss[loss=0.1786, simple_loss=0.2572, pruned_loss=0.05002, over 4812.00 frames. ], tot_loss[loss=0.18, simple_loss=0.2498, pruned_loss=0.05509, over 953248.01 frames. ], batch size: 33, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:01:08,789 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6263, 1.5767, 1.3174, 1.7123, 1.9452, 1.7221, 1.3722, 1.3166], device='cuda:6'), covar=tensor([0.2142, 0.1974, 0.1922, 0.1580, 0.1622, 0.1161, 0.2427, 0.1980], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0208, 0.0212, 0.0191, 0.0241, 0.0186, 0.0213, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:01:30,610 INFO [finetune.py:976] (6/7) Epoch 19, batch 2250, loss[loss=0.2053, simple_loss=0.2741, pruned_loss=0.06824, over 4112.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2519, pruned_loss=0.05576, over 953929.35 frames. ], batch size: 65, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:01:33,463 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.177e+02 1.629e+02 1.918e+02 2.372e+02 6.301e+02, threshold=3.835e+02, percent-clipped=3.0
2023-03-26 23:01:56,209 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105388.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:02:03,218 INFO [finetune.py:976] (6/7) Epoch 19, batch 2300, loss[loss=0.1492, simple_loss=0.2274, pruned_loss=0.03553, over 4765.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2508, pruned_loss=0.05414, over 955745.98 frames. ], batch size: 28, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:02:12,543 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0
2023-03-26 23:02:45,813 INFO [finetune.py:976] (6/7) Epoch 19, batch 2350, loss[loss=0.149, simple_loss=0.2197, pruned_loss=0.03916, over 4830.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2483, pruned_loss=0.05344, over 955084.13 frames. ], batch size: 30, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:02:45,932 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105449.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 23:02:48,226 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.493e+02 1.723e+02 2.054e+02 4.367e+02, threshold=3.447e+02, percent-clipped=1.0
2023-03-26 23:02:50,836 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0
2023-03-26 23:03:07,784 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4957, 1.5093, 1.9051, 2.9437, 1.9422, 2.3301, 0.9143, 2.4891], device='cuda:6'), covar=tensor([0.1764, 0.1332, 0.1233, 0.0574, 0.0852, 0.1172, 0.1784, 0.0500], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0115, 0.0132, 0.0162, 0.0099, 0.0134, 0.0122, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 23:03:10,204 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105486.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:03:19,410 INFO [finetune.py:976] (6/7) Epoch 19, batch 2400, loss[loss=0.1289, simple_loss=0.2006, pruned_loss=0.02862, over 4677.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2461, pruned_loss=0.05297, over 956426.47 frames. ], batch size: 23, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:03:23,373 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105502.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:04:14,909 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105547.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:04:15,986 INFO [finetune.py:976] (6/7) Epoch 19, batch 2450, loss[loss=0.2075, simple_loss=0.2594, pruned_loss=0.07778, over 4103.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2433, pruned_loss=0.05222, over 956361.50 frames. ], batch size: 65, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:04:18,405 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.415e+01 1.457e+02 1.742e+02 2.171e+02 6.143e+02, threshold=3.484e+02, percent-clipped=3.0
2023-03-26 23:04:25,665 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105563.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:04:49,908 INFO [finetune.py:976] (6/7) Epoch 19, batch 2500, loss[loss=0.2061, simple_loss=0.2837, pruned_loss=0.06422, over 4899.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2464, pruned_loss=0.05412, over 954753.98 frames. ], batch size: 35, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:05:01,619 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-26 23:05:23,461 INFO [finetune.py:976] (6/7) Epoch 19, batch 2550, loss[loss=0.1802, simple_loss=0.2516, pruned_loss=0.0544, over 4904.00 frames. ], tot_loss[loss=0.18, simple_loss=0.25, pruned_loss=0.055, over 955001.04 frames. ], batch size: 35, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:05:26,385 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.162e+02 1.575e+02 1.924e+02 2.412e+02 4.379e+02, threshold=3.848e+02, percent-clipped=4.0
2023-03-26 23:05:56,906 INFO [finetune.py:976] (6/7) Epoch 19, batch 2600, loss[loss=0.1725, simple_loss=0.2273, pruned_loss=0.05891, over 4341.00 frames. ], tot_loss[loss=0.181, simple_loss=0.2514, pruned_loss=0.05533, over 953431.03 frames. ], batch size: 19, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:06:27,125 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105744.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 23:06:30,088 INFO [finetune.py:976] (6/7) Epoch 19, batch 2650, loss[loss=0.1549, simple_loss=0.2382, pruned_loss=0.03577, over 4781.00 frames. ], tot_loss[loss=0.1822, simple_loss=0.2528, pruned_loss=0.0558, over 952260.36 frames. ], batch size: 29, lr: 3.29e-03, grad_scale: 32.0
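The learning rate decays very slowly across this stretch (3.31e-03 near batch 103.7k down to 3.28e-03 past batch 107k, ticking down as epoch 19 progresses). That profile is consistent with an Eden-style schedule that decays smoothly in both batch count and epoch; the reconstruction below is a hedged guess at the formula, and the example constants (base_lr=0.004, lr_batches=1.0e5, lr_epochs=100) are assumptions that happen to reproduce the ~3.3e-03 values seen around batch 104k.

    def eden_lr(base_lr: float, batch: int, epoch: float,
                lr_batches: float, lr_epochs: float) -> float:
        # Decay factor in the batch dimension and in the epoch dimension.
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    # Assumed settings: eden_lr(0.004, 104000, 19, 1.0e5, 100) ~= 3.3e-03,
    # matching the "lr: 3.30e-03" printed in this region of the log.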
2023-03-26 23:06:32,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.372e+01 1.545e+02 1.903e+02 2.181e+02 3.189e+02, threshold=3.806e+02, percent-clipped=0.0
2023-03-26 23:06:59,546 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105792.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:07:03,678 INFO [finetune.py:976] (6/7) Epoch 19, batch 2700, loss[loss=0.205, simple_loss=0.2716, pruned_loss=0.06917, over 4801.00 frames. ], tot_loss[loss=0.1823, simple_loss=0.2526, pruned_loss=0.05603, over 952678.34 frames. ], batch size: 41, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:07:03,806 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=105799.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:07:28,678 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9249, 1.8115, 1.6385, 1.4641, 1.9924, 1.6904, 1.7818, 1.9306], device='cuda:6'), covar=tensor([0.1354, 0.1656, 0.2877, 0.2259, 0.2373, 0.1553, 0.2695, 0.1745], device='cuda:6'), in_proj_covar=tensor([0.0187, 0.0189, 0.0238, 0.0255, 0.0249, 0.0204, 0.0217, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:07:33,450 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105842.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:07:37,643 INFO [finetune.py:976] (6/7) Epoch 19, batch 2750, loss[loss=0.1604, simple_loss=0.2271, pruned_loss=0.04687, over 4788.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2491, pruned_loss=0.05498, over 952016.52 frames. ], batch size: 26, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:07:40,095 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.803e+01 1.395e+02 1.671e+02 1.966e+02 3.086e+02, threshold=3.343e+02, percent-clipped=0.0
2023-03-26 23:07:40,232 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105853.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:07:43,736 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=105858.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:07:45,043 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=105860.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:08:22,328 INFO [finetune.py:976] (6/7) Epoch 19, batch 2800, loss[loss=0.1829, simple_loss=0.256, pruned_loss=0.05493, over 4837.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2457, pruned_loss=0.0535, over 953166.74 frames. ], batch size: 38, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:08:30,304 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5249, 1.5906, 2.0365, 1.8826, 1.8448, 4.1646, 1.5632, 1.7606], device='cuda:6'), covar=tensor([0.0990, 0.1860, 0.1238, 0.0938, 0.1524, 0.0173, 0.1449, 0.1787], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0077, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 23:08:32,104 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0005, 4.9098, 4.7205, 2.9220, 5.0473, 3.7565, 1.0258, 3.6997], device='cuda:6'), covar=tensor([0.2330, 0.1851, 0.1445, 0.2789, 0.0810, 0.0915, 0.4685, 0.1222], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0178, 0.0161, 0.0130, 0.0162, 0.0124, 0.0148, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 23:09:03,616 INFO [finetune.py:976] (6/7) Epoch 19, batch 2850, loss[loss=0.1842, simple_loss=0.2678, pruned_loss=0.05028, over 4892.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2449, pruned_loss=0.05332, over 952793.45 frames. ], batch size: 35, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:09:10,696 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.025e+01 1.444e+02 1.775e+02 2.176e+02 4.047e+02, threshold=3.549e+02, percent-clipped=5.0
2023-03-26 23:09:47,019 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9409, 1.3856, 1.9882, 1.9642, 1.7568, 1.7121, 1.8848, 1.8399], device='cuda:6'), covar=tensor([0.3802, 0.4097, 0.3353, 0.3651, 0.4662, 0.3719, 0.4597, 0.3173], device='cuda:6'), in_proj_covar=tensor([0.0251, 0.0240, 0.0260, 0.0278, 0.0276, 0.0250, 0.0285, 0.0242], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:09:49,339 INFO [finetune.py:976] (6/7) Epoch 19, batch 2900, loss[loss=0.2489, simple_loss=0.3065, pruned_loss=0.09562, over 4832.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2485, pruned_loss=0.05486, over 954706.51 frames. ], batch size: 33, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:10:01,098 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.02 vs. limit=5.0
2023-03-26 23:10:20,990 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106044.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:10:24,380 INFO [finetune.py:976] (6/7) Epoch 19, batch 2950, loss[loss=0.1696, simple_loss=0.2413, pruned_loss=0.04898, over 4919.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2505, pruned_loss=0.05508, over 951884.72 frames. ], batch size: 36, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:10:27,320 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.599e+02 1.865e+02 2.251e+02 4.962e+02, threshold=3.729e+02, percent-clipped=1.0
2023-03-26 23:10:35,886 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5982, 4.1713, 3.9671, 2.0817, 4.3253, 3.2976, 0.8064, 3.0218], device='cuda:6'), covar=tensor([0.2979, 0.1595, 0.1394, 0.3220, 0.0803, 0.0864, 0.4562, 0.1323], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0176, 0.0160, 0.0129, 0.0160, 0.0123, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 23:10:43,296 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106077.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:10:46,275 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7399, 2.5082, 2.0840, 0.9598, 2.2227, 1.9676, 1.9979, 2.2333], device='cuda:6'), covar=tensor([0.0827, 0.0794, 0.1686, 0.2267, 0.1497, 0.2496, 0.2009, 0.1030], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0199, 0.0181, 0.0209, 0.0206, 0.0221, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:10:53,298 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106092.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:10:57,518 INFO [finetune.py:976] (6/7) Epoch 19, batch 3000, loss[loss=0.1985, simple_loss=0.2717, pruned_loss=0.06261, over 4820.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2522, pruned_loss=0.05562, over 952669.28 frames. ], batch size: 39, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:10:57,519 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-26 23:11:02,038 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8079, 1.6778, 2.0488, 1.3466, 1.8983, 2.0702, 1.6416, 2.1798], device='cuda:6'), covar=tensor([0.1156, 0.2142, 0.1356, 0.1722, 0.0778, 0.1080, 0.2819, 0.0667], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0204, 0.0189, 0.0187, 0.0172, 0.0212, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:11:08,359 INFO [finetune.py:1010] (6/7) Epoch 19, validation: loss=0.1576, simple_loss=0.2259, pruned_loss=0.04462, over 2265189.00 frames.
2023-03-26 23:11:08,360 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB
2023-03-26 23:11:43,873 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106138.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:11:46,233 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106142.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:11:50,370 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106148.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:11:50,871 INFO [finetune.py:976] (6/7) Epoch 19, batch 3050, loss[loss=0.1826, simple_loss=0.2387, pruned_loss=0.06324, over 4712.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2525, pruned_loss=0.05544, over 952735.69 frames. ], batch size: 23, lr: 3.29e-03, grad_scale: 32.0
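At batch 3000 the run pauses for a held-out pass: "Computing validation loss" is followed by a single frame-weighted validation figure (loss=0.1576 over 2265189.00 frames), comfortably below the training tot_loss. A minimal sketch of such a periodic validation pass, with names assumed rather than copied from finetune.py:

    import torch

    @torch.no_grad()
    def compute_validation_loss(model, valid_loader, loss_fn):
        model.eval()
        tot, frames = 0.0, 0.0
        for batch in valid_loader:
            loss, num_frames = loss_fn(model, batch)  # per-batch loss and frame count
            tot += loss.item() * num_frames
            frames += num_frames
        model.train()
        return tot / frames  # reported as 'validation: loss=... over <frames> frames'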
2023-03-26 23:11:53,804 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.577e+02 1.927e+02 2.196e+02 3.458e+02, threshold=3.854e+02, percent-clipped=0.0
2023-03-26 23:11:54,160 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0
2023-03-26 23:11:55,079 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106155.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:11:57,434 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106158.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:12:18,599 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106190.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:12:24,007 INFO [finetune.py:976] (6/7) Epoch 19, batch 3100, loss[loss=0.1699, simple_loss=0.2402, pruned_loss=0.04975, over 4781.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2498, pruned_loss=0.05471, over 953255.96 frames. ], batch size: 51, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:12:29,244 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106206.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:12:57,681 INFO [finetune.py:976] (6/7) Epoch 19, batch 3150, loss[loss=0.1585, simple_loss=0.2271, pruned_loss=0.04498, over 4916.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2473, pruned_loss=0.0543, over 954014.13 frames. ], batch size: 46, lr: 3.29e-03, grad_scale: 32.0
2023-03-26 23:13:00,115 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.828e+01 1.675e+02 1.879e+02 2.192e+02 3.916e+02, threshold=3.758e+02, percent-clipped=1.0
2023-03-26 23:13:20,400 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106268.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:13:41,053 INFO [finetune.py:976] (6/7) Epoch 19, batch 3200, loss[loss=0.1788, simple_loss=0.2509, pruned_loss=0.05334, over 4859.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2444, pruned_loss=0.05342, over 954895.16 frames. ], batch size: 34, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:13:58,433 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2239, 1.4602, 0.7681, 1.9964, 2.3943, 1.7167, 1.5704, 1.8426], device='cuda:6'), covar=tensor([0.1240, 0.1945, 0.2090, 0.1025, 0.1790, 0.2049, 0.1434, 0.1906], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0091, 0.0119, 0.0092, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 23:14:00,216 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8717, 1.7180, 1.6546, 2.0172, 2.0426, 1.9856, 1.3875, 1.6176], device='cuda:6'), covar=tensor([0.1884, 0.1810, 0.1729, 0.1368, 0.1428, 0.1028, 0.2317, 0.1698], device='cuda:6'), in_proj_covar=tensor([0.0241, 0.0209, 0.0211, 0.0192, 0.0241, 0.0186, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:14:01,443 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106329.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:14:05,726 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106336.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:14:06,933 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0048, 1.5339, 0.7634, 1.6798, 2.1840, 1.4462, 1.6058, 1.7203], device='cuda:6'), covar=tensor([0.1347, 0.1938, 0.2108, 0.1165, 0.1863, 0.2029, 0.1423, 0.1936], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0090, 0.0119, 0.0092, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 23:14:06,942 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4479, 1.5184, 1.9434, 1.7990, 1.6641, 3.6084, 1.4510, 1.6440], device='cuda:6'), covar=tensor([0.0981, 0.1720, 0.1111, 0.0928, 0.1483, 0.0208, 0.1464, 0.1677], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 23:14:16,902 INFO [finetune.py:976] (6/7) Epoch 19, batch 3250, loss[loss=0.1697, simple_loss=0.2368, pruned_loss=0.05129, over 4179.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2451, pruned_loss=0.05372, over 952917.07 frames. ], batch size: 65, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:14:24,769 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.520e+02 1.839e+02 2.222e+02 4.428e+02, threshold=3.677e+02, percent-clipped=2.0
2023-03-26 23:15:08,396 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106397.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:15:10,068 INFO [finetune.py:976] (6/7) Epoch 19, batch 3300, loss[loss=0.2007, simple_loss=0.2712, pruned_loss=0.06509, over 4739.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2497, pruned_loss=0.05487, over 955323.87 frames. ], batch size: 59, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:15:25,642 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7011, 3.9925, 3.8463, 1.8846, 4.1751, 3.1726, 0.8069, 2.8790], device='cuda:6'), covar=tensor([0.2514, 0.1903, 0.1461, 0.3290, 0.1014, 0.0911, 0.4547, 0.1457], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0129, 0.0161, 0.0123, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 23:15:32,154 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5529, 1.5676, 1.5047, 0.9693, 1.7030, 1.9205, 1.9201, 1.4017], device='cuda:6'), covar=tensor([0.1004, 0.0579, 0.0578, 0.0577, 0.0462, 0.0640, 0.0310, 0.0664], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0151, 0.0125, 0.0125, 0.0131, 0.0129, 0.0141, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.1067e-05, 1.0947e-04, 8.9160e-05, 8.8760e-05, 9.2388e-05, 9.2634e-05, 1.0089e-04, 1.0602e-04], device='cuda:6')
2023-03-26 23:15:36,593 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0
2023-03-26 23:15:41,470 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106433.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:15:51,017 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106448.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:15:51,546 INFO [finetune.py:976] (6/7) Epoch 19, batch 3350, loss[loss=0.1978, simple_loss=0.2601, pruned_loss=0.06769, over 4927.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.252, pruned_loss=0.05521, over 956645.28 frames. ], batch size: 33, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:15:54,464 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.616e+02 1.883e+02 2.222e+02 4.657e+02, threshold=3.766e+02, percent-clipped=2.0
2023-03-26 23:15:55,202 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=106454.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:15:56,241 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106455.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:16:31,245 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106496.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:16:33,552 INFO [finetune.py:976] (6/7) Epoch 19, batch 3400, loss[loss=0.1937, simple_loss=0.2684, pruned_loss=0.05948, over 4848.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2537, pruned_loss=0.05601, over 954465.43 frames. ], batch size: 49, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:16:36,558 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106503.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:16:44,443 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=106515.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:17:06,743 INFO [finetune.py:976] (6/7) Epoch 19, batch 3450, loss[loss=0.2004, simple_loss=0.2722, pruned_loss=0.06432, over 4895.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2522, pruned_loss=0.05525, over 949491.23 frames. ], batch size: 43, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:17:09,628 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.792e+01 1.525e+02 1.781e+02 2.060e+02 3.433e+02, threshold=3.562e+02, percent-clipped=0.0
2023-03-26 23:17:09,776 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5571, 1.3615, 1.2749, 1.5097, 1.6975, 1.5269, 1.0556, 1.3098], device='cuda:6'), covar=tensor([0.2049, 0.2000, 0.1911, 0.1560, 0.1522, 0.1234, 0.2338, 0.1882], device='cuda:6'), in_proj_covar=tensor([0.0240, 0.0208, 0.0210, 0.0191, 0.0240, 0.0186, 0.0213, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:17:40,388 INFO [finetune.py:976] (6/7) Epoch 19, batch 3500, loss[loss=0.1888, simple_loss=0.254, pruned_loss=0.06175, over 4903.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2509, pruned_loss=0.05504, over 951056.51 frames. ], batch size: 35, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:17:42,344 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1827, 1.8588, 2.0062, 1.5199, 2.1742, 2.1768, 2.2118, 1.4051], device='cuda:6'), covar=tensor([0.0685, 0.0976, 0.0801, 0.1101, 0.0632, 0.0768, 0.0704, 0.1763], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0133, 0.0138, 0.0119, 0.0122, 0.0137, 0.0138, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:17:57,675 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106624.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:18:14,143 INFO [finetune.py:976] (6/7) Epoch 19, batch 3550, loss[loss=0.1499, simple_loss=0.2195, pruned_loss=0.04012, over 4787.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2479, pruned_loss=0.05458, over 953494.70 frames. ], batch size: 26, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:18:16,540 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.567e+02 1.861e+02 2.307e+02 3.604e+02, threshold=3.722e+02, percent-clipped=2.0
2023-03-26 23:18:44,005 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.68 vs. limit=2.0
2023-03-26 23:18:51,982 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106692.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:18:56,159 INFO [finetune.py:976] (6/7) Epoch 19, batch 3600, loss[loss=0.1481, simple_loss=0.2176, pruned_loss=0.03929, over 4772.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2457, pruned_loss=0.05372, over 955701.88 frames. ], batch size: 28, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:19:19,206 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106733.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:19:29,858 INFO [finetune.py:976] (6/7) Epoch 19, batch 3650, loss[loss=0.1629, simple_loss=0.2441, pruned_loss=0.04088, over 4763.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2477, pruned_loss=0.0542, over 954672.04 frames. ], batch size: 59, lr: 3.28e-03, grad_scale: 32.0
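The recurring "Whitening: ... metric=X vs. limit=Y" lines track how close a module's activations are to having a white (isotropic) channel covariance, with the metric staying under its limit here. The function below is a hedged guess at one plausible whiteness statistic, not scaling.py's actual code: the mean squared eigenvalue of the per-group covariance divided by the squared mean eigenvalue, which equals 1.0 for perfectly white features and grows as the spectrum spreads out.

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
        """x: (num_frames, num_channels); channels split into num_groups groups."""
        n, c = x.shape
        d = c // num_groups
        x = x.reshape(n, num_groups, d).transpose(0, 1)   # (groups, frames, d)
        x = x - x.mean(dim=1, keepdim=True)
        cov = x.transpose(1, 2) @ x / n                    # (groups, d, d)
        # trace(C^2)/d over (trace(C)/d)^2, computed without an eigendecomposition.
        num = (cov * cov).sum(dim=(1, 2)) / d
        den = (torch.diagonal(cov, dim1=1, dim2=2).sum(dim=1) / d) ** 2
        return (num / den).mean()                          # 1.0 when cov is isotropic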
2023-03-26 23:19:34,982 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.178e+02 1.609e+02 2.013e+02 2.438e+02 4.457e+02, threshold=4.025e+02, percent-clipped=1.0
2023-03-26 23:19:46,553 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6994, 1.8193, 2.2348, 1.9922, 2.0909, 4.3394, 1.7666, 1.8197], device='cuda:6'), covar=tensor([0.0908, 0.1771, 0.1140, 0.0954, 0.1372, 0.0166, 0.1366, 0.1711], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 23:20:03,525 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106781.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:20:23,370 INFO [finetune.py:976] (6/7) Epoch 19, batch 3700, loss[loss=0.1551, simple_loss=0.2173, pruned_loss=0.04644, over 3996.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2489, pruned_loss=0.05412, over 952024.37 frames. ], batch size: 17, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:20:32,845 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=106810.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:20:59,510 INFO [finetune.py:976] (6/7) Epoch 19, batch 3750, loss[loss=0.1733, simple_loss=0.2571, pruned_loss=0.04477, over 4898.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2523, pruned_loss=0.05505, over 952671.20 frames. ], batch size: 43, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:21:06,524 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.604e+02 1.833e+02 2.350e+02 4.465e+02, threshold=3.666e+02, percent-clipped=2.0
2023-03-26 23:21:48,083 INFO [finetune.py:976] (6/7) Epoch 19, batch 3800, loss[loss=0.1519, simple_loss=0.2352, pruned_loss=0.03433, over 4900.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2531, pruned_loss=0.05527, over 952109.85 frames. ], batch size: 46, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:21:58,872 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8973, 1.3313, 1.9688, 1.9023, 1.7056, 1.6914, 1.7846, 1.8468], device='cuda:6'), covar=tensor([0.3802, 0.4018, 0.3325, 0.3524, 0.4901, 0.3767, 0.4648, 0.3087], device='cuda:6'), in_proj_covar=tensor([0.0249, 0.0237, 0.0258, 0.0276, 0.0274, 0.0248, 0.0282, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:22:07,682 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106924.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:22:16,664 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0
2023-03-26 23:22:23,589 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2213, 2.8782, 2.7530, 1.2481, 2.9820, 2.1691, 0.8652, 1.9014], device='cuda:6'), covar=tensor([0.2595, 0.2437, 0.1839, 0.3664, 0.1469, 0.1301, 0.4072, 0.1662], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0179, 0.0162, 0.0131, 0.0162, 0.0124, 0.0149, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 23:22:24,711 INFO [finetune.py:976] (6/7) Epoch 19, batch 3850, loss[loss=0.2106, simple_loss=0.2743, pruned_loss=0.07343, over 4735.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2519, pruned_loss=0.05487, over 952656.37 frames. ], batch size: 59, lr: 3.28e-03, grad_scale: 64.0
2023-03-26 23:22:27,158 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.119e+02 1.623e+02 1.818e+02 2.255e+02 6.115e+02, threshold=3.637e+02, percent-clipped=1.0
2023-03-26 23:22:39,186 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=106972.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:22:52,751 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=106992.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:22:57,317 INFO [finetune.py:976] (6/7) Epoch 19, batch 3900, loss[loss=0.1853, simple_loss=0.2388, pruned_loss=0.06594, over 4248.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.25, pruned_loss=0.05481, over 953333.32 frames. ], batch size: 65, lr: 3.28e-03, grad_scale: 64.0
2023-03-26 23:23:24,565 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107040.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:23:24,579 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5994, 3.8954, 3.6657, 1.7864, 3.9162, 2.8994, 0.8069, 2.7161], device='cuda:6'), covar=tensor([0.2547, 0.1976, 0.1629, 0.3668, 0.1349, 0.1139, 0.5135, 0.1611], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0178, 0.0161, 0.0130, 0.0162, 0.0124, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-26 23:23:29,940 INFO [finetune.py:976] (6/7) Epoch 19, batch 3950, loss[loss=0.1777, simple_loss=0.2497, pruned_loss=0.05283, over 4914.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2468, pruned_loss=0.05351, over 954953.70 frames. ], batch size: 36, lr: 3.28e-03, grad_scale: 64.0
2023-03-26 23:23:35,024 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.301e+01 1.519e+02 1.802e+02 2.250e+02 5.271e+02, threshold=3.605e+02, percent-clipped=1.0
2023-03-26 23:24:12,981 INFO [finetune.py:976] (6/7) Epoch 19, batch 4000, loss[loss=0.2205, simple_loss=0.2807, pruned_loss=0.08013, over 4195.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2484, pruned_loss=0.05488, over 955154.21 frames. ], batch size: 65, lr: 3.28e-03, grad_scale: 64.0
2023-03-26 23:24:21,406 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107110.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:24:40,474 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3807, 2.2382, 2.3986, 1.5739, 2.3361, 2.3183, 2.3847, 1.8895], device='cuda:6'), covar=tensor([0.0539, 0.0620, 0.0569, 0.0900, 0.0629, 0.0730, 0.0609, 0.1092], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0134, 0.0139, 0.0120, 0.0124, 0.0138, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-26 23:24:41,078 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107140.0, num_to_drop=1, layers_to_drop={1}
2023-03-26 23:24:46,975 INFO [finetune.py:976] (6/7) Epoch 19, batch 4050, loss[loss=0.1444, simple_loss=0.2155, pruned_loss=0.03668, over 4761.00 frames. ], tot_loss[loss=0.1795, simple_loss=0.2494, pruned_loss=0.05479, over 953496.12 frames. ], batch size: 27, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:24:48,821 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107152.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:24:49,865 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.579e+02 1.895e+02 2.231e+02 3.900e+02, threshold=3.790e+02, percent-clipped=1.0
2023-03-26 23:24:52,893 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107158.0, num_to_drop=0, layers_to_drop=set()
2023-03-26 23:25:02,235 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4814, 1.3531, 1.3155, 1.3938, 0.7758, 2.9177, 1.0332, 1.3596], device='cuda:6'), covar=tensor([0.3319, 0.2806, 0.2310, 0.2558, 0.2179, 0.0250, 0.2806, 0.1434], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0114, 0.0119, 0.0122, 0.0113, 0.0095, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-26 23:25:10,838 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8295, 1.2528, 0.9750, 1.6332, 2.1271, 1.3896, 1.5504, 1.6387], device='cuda:6'), covar=tensor([0.1469, 0.2143, 0.1924, 0.1210, 0.1931, 0.1836, 0.1473, 0.2008], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0090, 0.0118, 0.0091, 0.0097, 0.0087], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-26 23:25:40,276 INFO [finetune.py:976] (6/7) Epoch 19, batch 4100, loss[loss=0.2046, simple_loss=0.2665, pruned_loss=0.07137, over 4899.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2522, pruned_loss=0.05554, over 955840.81 frames. ], batch size: 36, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:25:41,796 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107201.0, num_to_drop=1, layers_to_drop={2}
2023-03-26 23:25:50,502 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107213.0, num_to_drop=1, layers_to_drop={0}
2023-03-26 23:26:13,435 INFO [finetune.py:976] (6/7) Epoch 19, batch 4150, loss[loss=0.1738, simple_loss=0.2387, pruned_loss=0.05446, over 4795.00 frames. ], tot_loss[loss=0.1828, simple_loss=0.2533, pruned_loss=0.05617, over 955434.76 frames. ], batch size: 25, lr: 3.28e-03, grad_scale: 32.0
2023-03-26 23:26:21,811 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.570e+02 1.970e+02 2.461e+02 5.293e+02, threshold=3.939e+02, percent-clipped=1.0
2023-03-26 23:26:24,769 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0
2023-03-26 23:26:42,546 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3760, 1.2160, 1.5457, 2.3869, 1.5008, 2.0132, 1.0078, 1.9879], device='cuda:6'), covar=tensor([0.1774, 0.1634, 0.1269, 0.0893, 0.1072, 0.1446, 0.1587, 0.0725], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0123, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-26 23:26:56,744 INFO [finetune.py:976] (6/7) Epoch 19, batch 4200, loss[loss=0.1563, simple_loss=0.2317, pruned_loss=0.04041, over 4804.00 frames. ], tot_loss[loss=0.1835, simple_loss=0.2538, pruned_loss=0.05654, over 956344.30 frames. ], batch size: 39, lr: 3.28e-03, grad_scale: 32.0
], batch size: 39, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:26:58,099 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0587, 1.8812, 1.6670, 1.8009, 1.7674, 1.8016, 1.8074, 2.5122], device='cuda:6'), covar=tensor([0.3885, 0.4166, 0.3213, 0.3745, 0.4129, 0.2437, 0.3717, 0.1763], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0264, 0.0232, 0.0278, 0.0254, 0.0224, 0.0253, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:27:10,900 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1885, 1.7469, 1.7432, 0.8269, 2.0714, 2.1698, 1.9606, 1.7059], device='cuda:6'), covar=tensor([0.0945, 0.0817, 0.0607, 0.0786, 0.0649, 0.0921, 0.0558, 0.0857], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0125, 0.0125, 0.0131, 0.0128, 0.0141, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.0725e-05, 1.0880e-04, 8.9111e-05, 8.8335e-05, 9.2027e-05, 9.1848e-05, 1.0137e-04, 1.0577e-04], device='cuda:6') 2023-03-26 23:27:21,561 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 23:27:29,944 INFO [finetune.py:976] (6/7) Epoch 19, batch 4250, loss[loss=0.1638, simple_loss=0.2255, pruned_loss=0.05104, over 4729.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.251, pruned_loss=0.05565, over 954809.56 frames. ], batch size: 54, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:27:33,463 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.503e+02 1.795e+02 2.146e+02 3.676e+02, threshold=3.590e+02, percent-clipped=0.0 2023-03-26 23:27:38,939 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2969, 2.9378, 3.0289, 3.2105, 3.1016, 2.9064, 3.3564, 0.9493], device='cuda:6'), covar=tensor([0.1053, 0.1066, 0.1069, 0.1170, 0.1665, 0.1763, 0.1077, 0.5680], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0242, 0.0277, 0.0290, 0.0331, 0.0281, 0.0300, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:27:51,769 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7477, 1.5961, 1.4520, 1.8202, 1.9731, 1.7786, 1.2521, 1.4513], device='cuda:6'), covar=tensor([0.2015, 0.1970, 0.1837, 0.1505, 0.1646, 0.1165, 0.2524, 0.1853], device='cuda:6'), in_proj_covar=tensor([0.0243, 0.0209, 0.0212, 0.0192, 0.0242, 0.0188, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:28:02,846 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3288, 2.9618, 2.8340, 1.3030, 3.0689, 2.2711, 1.0518, 2.0384], device='cuda:6'), covar=tensor([0.2276, 0.2056, 0.1781, 0.3690, 0.1533, 0.1179, 0.3788, 0.1712], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0175, 0.0159, 0.0128, 0.0159, 0.0122, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 23:28:03,393 INFO [finetune.py:976] (6/7) Epoch 19, batch 4300, loss[loss=0.1704, simple_loss=0.2491, pruned_loss=0.04588, over 4789.00 frames. ], tot_loss[loss=0.1806, simple_loss=0.2498, pruned_loss=0.05566, over 955947.51 frames. 
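Each optim.py line above reports five grad-norm statistics (min, 25th percentile, median, 75th percentile, max over recent batches) plus a clipping threshold, and in every entry the threshold equals Clipping_scale times the logged median (e.g. 2.0 * 1.795e+02 = 3.590e+02 here). So the clipping target tracks the running median of gradient norms, and percent-clipped reports how often recent batches exceeded it. A sketch of that bookkeeping, assuming a plain sliding window and a per-report clip counter (the optimizer's actual smoothing may differ):

```python
from collections import deque
import numpy as np

class MedianClipper:
    """Track recent grad norms; clip at clipping_scale * running median.

    A sketch of the statistics behind the "optim.py" log lines above,
    under the assumption of a simple sliding window of recent norms.
    """

    def __init__(self, clipping_scale: float = 2.0, window: int = 50):
        self.clipping_scale = clipping_scale
        self.norms = deque(maxlen=window)
        self.num_seen = 0
        self.num_clipped = 0

    def observe(self, grad_norm: float):
        """Record one batch's grad norm; return (quartiles, threshold)."""
        self.norms.append(grad_norm)
        quartiles = np.percentile(self.norms, [0, 25, 50, 75, 100])
        threshold = self.clipping_scale * quartiles[2]  # 2.0 * median, as logged
        self.num_seen += 1
        if grad_norm > threshold:
            self.num_clipped += 1
        return quartiles, threshold

    def percent_clipped(self) -> float:
        """Share of batches clipped since the last report, then reset."""
        pct = 100.0 * self.num_clipped / max(self.num_seen, 1)
        self.num_seen = self.num_clipped = 0
        return pct
```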
], batch size: 29, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:28:09,585 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3266, 2.3395, 1.8227, 2.6268, 2.3767, 2.0612, 2.8780, 2.3944], device='cuda:6'), covar=tensor([0.1350, 0.2267, 0.3219, 0.2441, 0.2585, 0.1673, 0.3415, 0.1842], device='cuda:6'), in_proj_covar=tensor([0.0185, 0.0188, 0.0235, 0.0253, 0.0247, 0.0202, 0.0215, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:28:15,970 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0168, 4.4889, 4.3324, 2.3474, 4.5025, 3.4010, 0.7210, 3.1842], device='cuda:6'), covar=tensor([0.2316, 0.2012, 0.1291, 0.3124, 0.0960, 0.0946, 0.5040, 0.1506], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0175, 0.0159, 0.0129, 0.0159, 0.0122, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 23:28:17,715 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8984, 1.8429, 1.9233, 1.1718, 1.9754, 1.9009, 1.8960, 1.5896], device='cuda:6'), covar=tensor([0.0515, 0.0591, 0.0585, 0.0894, 0.0723, 0.0685, 0.0592, 0.1152], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0135, 0.0140, 0.0121, 0.0125, 0.0139, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:28:24,668 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3790, 3.7938, 3.9990, 4.1777, 4.0990, 3.8942, 4.4506, 1.4550], device='cuda:6'), covar=tensor([0.0765, 0.0821, 0.0881, 0.1086, 0.1164, 0.1363, 0.0656, 0.5502], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0243, 0.0277, 0.0290, 0.0332, 0.0281, 0.0301, 0.0293], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:28:35,891 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.17 vs. limit=5.0 2023-03-26 23:28:36,197 INFO [finetune.py:976] (6/7) Epoch 19, batch 4350, loss[loss=0.1608, simple_loss=0.2337, pruned_loss=0.04395, over 4900.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2463, pruned_loss=0.05434, over 958330.76 frames. ], batch size: 43, lr: 3.28e-03, grad_scale: 32.0 2023-03-26 23:28:40,180 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.532e+02 1.813e+02 2.231e+02 3.395e+02, threshold=3.625e+02, percent-clipped=1.0 2023-03-26 23:29:10,445 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3355, 3.7624, 3.9609, 4.1643, 4.0528, 3.8526, 4.4271, 1.3767], device='cuda:6'), covar=tensor([0.0855, 0.1013, 0.1132, 0.1209, 0.1526, 0.1854, 0.0847, 0.6063], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0244, 0.0279, 0.0291, 0.0334, 0.0283, 0.0302, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:29:21,288 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107496.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 23:29:23,021 INFO [finetune.py:976] (6/7) Epoch 19, batch 4400, loss[loss=0.2152, simple_loss=0.2807, pruned_loss=0.0749, over 4854.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2472, pruned_loss=0.055, over 957231.56 frames. 
], batch size: 49, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:29:29,600 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107508.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 23:29:31,356 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107510.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:29:56,828 INFO [finetune.py:976] (6/7) Epoch 19, batch 4450, loss[loss=0.1922, simple_loss=0.2685, pruned_loss=0.05792, over 4820.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2495, pruned_loss=0.05518, over 954736.27 frames. ], batch size: 38, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:29:59,905 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.983e+01 1.601e+02 1.972e+02 2.467e+02 3.942e+02, threshold=3.944e+02, percent-clipped=4.0 2023-03-26 23:30:12,267 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107571.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:30:42,902 INFO [finetune.py:976] (6/7) Epoch 19, batch 4500, loss[loss=0.1856, simple_loss=0.2632, pruned_loss=0.05403, over 4738.00 frames. ], tot_loss[loss=0.1817, simple_loss=0.2519, pruned_loss=0.05574, over 954346.95 frames. ], batch size: 27, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:30:54,481 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2948, 1.4604, 1.9934, 1.7340, 1.5178, 3.2854, 1.3185, 1.4453], device='cuda:6'), covar=tensor([0.1248, 0.2235, 0.1288, 0.1106, 0.1807, 0.0298, 0.1841, 0.2318], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 23:31:25,201 INFO [finetune.py:976] (6/7) Epoch 19, batch 4550, loss[loss=0.1719, simple_loss=0.231, pruned_loss=0.05641, over 4686.00 frames. ], tot_loss[loss=0.1826, simple_loss=0.2531, pruned_loss=0.05607, over 955088.93 frames. ], batch size: 23, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:31:28,199 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.552e+02 1.832e+02 2.186e+02 5.352e+02, threshold=3.664e+02, percent-clipped=1.0 2023-03-26 23:31:31,341 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107659.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:31:38,981 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-26 23:32:12,110 INFO [finetune.py:976] (6/7) Epoch 19, batch 4600, loss[loss=0.167, simple_loss=0.2447, pruned_loss=0.04466, over 4864.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2522, pruned_loss=0.05575, over 954944.62 frames. ], batch size: 34, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:32:26,324 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:32:37,571 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=107737.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:32:45,689 INFO [finetune.py:976] (6/7) Epoch 19, batch 4650, loss[loss=0.1696, simple_loss=0.2336, pruned_loss=0.05278, over 4773.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2488, pruned_loss=0.05482, over 956633.51 frames. 
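The zipformer.py:1188 lines above carry a per-stack schedule (warmup_begin/warmup_end, measured in batches) and occasionally report num_to_drop=1 with a concrete layers_to_drop set: whole encoder layers are stochastically skipped as a regularizer. Since batch_count here (~107k) is far beyond every warmup_end, the drops that still appear imply a small residual skip probability after warmup. A sketch of that selection logic under those assumptions (the probabilities below are illustrative constants, not the module's actual values):

```python
import random

def pick_layers_to_drop(num_layers: int, batch_count: float,
                        warmup_end: float,
                        warmup_p: float = 0.5, residual_p: float = 0.075):
    """Choose the set of layer indices to skip for one batch.

    Assumption inferred from the log: a higher drop rate while
    batch_count < warmup_end, decaying to a small residual rate
    afterwards -- which is why "num_to_drop=1" still appears at
    batch_count ~ 107000 even though every warmup_end is ~4000.
    """
    p = warmup_p if batch_count < warmup_end else residual_p
    return {i for i in range(num_layers) if random.random() < p}

# e.g. layers_to_drop = pick_layers_to_drop(4, 107140.0, warmup_end=3333.3)
```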
], batch size: 28, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:32:48,740 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.855e+01 1.504e+02 1.713e+02 2.086e+02 4.043e+02, threshold=3.426e+02, percent-clipped=2.0 2023-03-26 23:33:03,453 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-26 23:33:17,183 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107796.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:33:18,901 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=107798.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:33:19,351 INFO [finetune.py:976] (6/7) Epoch 19, batch 4700, loss[loss=0.162, simple_loss=0.2185, pruned_loss=0.05278, over 4218.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2448, pruned_loss=0.05301, over 955703.57 frames. ], batch size: 65, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:33:25,048 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=107808.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:33:30,474 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7363, 4.9011, 4.5703, 2.7086, 5.0527, 3.9234, 0.8670, 3.4738], device='cuda:6'), covar=tensor([0.2597, 0.1680, 0.1501, 0.3183, 0.0771, 0.0704, 0.5084, 0.1386], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0177, 0.0160, 0.0130, 0.0160, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 23:33:50,632 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107844.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:33:54,087 INFO [finetune.py:976] (6/7) Epoch 19, batch 4750, loss[loss=0.1634, simple_loss=0.2345, pruned_loss=0.04615, over 4833.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2423, pruned_loss=0.05205, over 956829.63 frames. ], batch size: 33, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:33:57,616 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.438e+02 1.688e+02 2.143e+02 3.806e+02, threshold=3.376e+02, percent-clipped=2.0 2023-03-26 23:33:58,882 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=107856.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:34:04,994 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=107866.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:34:37,227 INFO [finetune.py:976] (6/7) Epoch 19, batch 4800, loss[loss=0.1932, simple_loss=0.2613, pruned_loss=0.06251, over 4813.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2458, pruned_loss=0.05362, over 954708.63 frames. ], batch size: 38, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:35:10,747 INFO [finetune.py:976] (6/7) Epoch 19, batch 4850, loss[loss=0.1728, simple_loss=0.2527, pruned_loss=0.04646, over 4882.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2494, pruned_loss=0.05456, over 952856.45 frames. ], batch size: 32, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:35:13,744 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.610e+02 1.895e+02 2.225e+02 4.035e+02, threshold=3.790e+02, percent-clipped=2.0 2023-03-26 23:35:17,586 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-26 23:35:45,813 INFO [finetune.py:976] (6/7) Epoch 19, batch 4900, loss[loss=0.2126, simple_loss=0.2865, pruned_loss=0.0693, over 4812.00 frames. 
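The attn_weights_entropy dumps above are a health check on the attention heads: each value is (roughly) the entropy of a head's attention distribution, so values near 0 flag a head that has collapsed onto a single position, while values around log(key_len) indicate diffuse, near-uniform attention. A self-contained sketch of computing such a per-head entropy from raw attention weights (the recipe's own reduction over batch and time may differ):

```python
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Per-head entropy of attention weights.

    attn: (num_heads, query_len, key_len); each row is a probability
    distribution over key positions. Returns a (num_heads,) tensor,
    averaged over query positions.
    """
    eps = 1e-20  # guard against log(0) for exactly-zero weights
    ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (num_heads, query_len)
    return ent.mean(dim=-1)

# Uniform attention over 8 positions gives entropy log(8) ~ 2.08, the same
# order of magnitude as most of the values logged above.
attn = torch.full((4, 10, 8), 1.0 / 8)
print(attn_weights_entropy(attn))  # ~2.079 for every head
```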
], tot_loss[loss=0.1806, simple_loss=0.251, pruned_loss=0.05511, over 952043.97 frames. ], batch size: 41, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:35:50,101 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-26 23:35:57,388 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108015.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:36:28,166 INFO [finetune.py:976] (6/7) Epoch 19, batch 4950, loss[loss=0.1598, simple_loss=0.2272, pruned_loss=0.04626, over 4779.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2516, pruned_loss=0.05464, over 953756.99 frames. ], batch size: 26, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:36:31,628 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.136e+02 1.572e+02 1.807e+02 2.323e+02 4.539e+02, threshold=3.614e+02, percent-clipped=1.0 2023-03-26 23:37:04,011 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108093.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:37:10,955 INFO [finetune.py:976] (6/7) Epoch 19, batch 5000, loss[loss=0.1571, simple_loss=0.2254, pruned_loss=0.0444, over 4918.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.2501, pruned_loss=0.05434, over 952429.00 frames. ], batch size: 36, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:37:26,165 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2843, 2.9039, 2.7958, 1.2285, 3.0803, 2.3305, 0.8425, 1.8531], device='cuda:6'), covar=tensor([0.2301, 0.2283, 0.1728, 0.3461, 0.1322, 0.1160, 0.3748, 0.1688], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0175, 0.0159, 0.0129, 0.0159, 0.0122, 0.0146, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 23:37:48,285 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1266, 1.3402, 1.4001, 0.6409, 1.3162, 1.5825, 1.6468, 1.2671], device='cuda:6'), covar=tensor([0.1022, 0.0682, 0.0558, 0.0573, 0.0511, 0.0638, 0.0361, 0.0759], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0124, 0.0124, 0.0131, 0.0128, 0.0141, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.0350e-05, 1.0824e-04, 8.8721e-05, 8.8177e-05, 9.2070e-05, 9.1560e-05, 1.0127e-04, 1.0539e-04], device='cuda:6') 2023-03-26 23:37:54,039 INFO [finetune.py:976] (6/7) Epoch 19, batch 5050, loss[loss=0.1635, simple_loss=0.2373, pruned_loss=0.04484, over 4801.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2479, pruned_loss=0.05384, over 952659.42 frames. 
], batch size: 25, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:37:57,572 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.150e+02 1.577e+02 1.863e+02 2.132e+02 3.762e+02, threshold=3.725e+02, percent-clipped=1.0 2023-03-26 23:38:05,844 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108166.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:38:14,306 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4528, 2.2530, 1.7624, 0.7789, 1.8332, 1.9569, 1.7973, 2.0435], device='cuda:6'), covar=tensor([0.0749, 0.0702, 0.1440, 0.1993, 0.1340, 0.2155, 0.1978, 0.0881], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0193, 0.0200, 0.0183, 0.0210, 0.0208, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:38:27,779 INFO [finetune.py:976] (6/7) Epoch 19, batch 5100, loss[loss=0.1375, simple_loss=0.2076, pruned_loss=0.03371, over 4828.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2448, pruned_loss=0.0528, over 955374.96 frames. ], batch size: 39, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:38:35,245 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5568, 1.4174, 1.5532, 0.8277, 1.7334, 1.9352, 1.8063, 1.4632], device='cuda:6'), covar=tensor([0.1057, 0.1046, 0.0549, 0.0618, 0.0494, 0.0604, 0.0453, 0.0829], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0124, 0.0124, 0.0131, 0.0128, 0.0141, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.0472e-05, 1.0830e-04, 8.8973e-05, 8.8240e-05, 9.2272e-05, 9.1670e-05, 1.0154e-04, 1.0547e-04], device='cuda:6') 2023-03-26 23:38:38,021 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=108214.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:38:39,908 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-26 23:38:47,296 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-26 23:39:00,759 INFO [finetune.py:976] (6/7) Epoch 19, batch 5150, loss[loss=0.197, simple_loss=0.2592, pruned_loss=0.06739, over 4930.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2467, pruned_loss=0.05422, over 955868.48 frames. 
], batch size: 33, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:39:04,796 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.278e+01 1.451e+02 1.873e+02 2.231e+02 4.201e+02, threshold=3.747e+02, percent-clipped=0.0 2023-03-26 23:39:14,978 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1767, 2.1164, 2.7376, 1.5909, 2.4951, 2.4615, 1.9449, 2.8192], device='cuda:6'), covar=tensor([0.1439, 0.1982, 0.1636, 0.2346, 0.0888, 0.1695, 0.2572, 0.0783], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0205, 0.0191, 0.0189, 0.0174, 0.0213, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:39:23,397 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0881, 2.1064, 2.0384, 1.3743, 2.0750, 2.2277, 2.1579, 1.7674], device='cuda:6'), covar=tensor([0.0549, 0.0571, 0.0664, 0.0915, 0.0706, 0.0605, 0.0582, 0.1086], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0134, 0.0139, 0.0120, 0.0125, 0.0138, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:39:39,546 INFO [finetune.py:976] (6/7) Epoch 19, batch 5200, loss[loss=0.1914, simple_loss=0.2617, pruned_loss=0.06053, over 4826.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2501, pruned_loss=0.05523, over 957062.32 frames. ], batch size: 33, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:39:54,430 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108315.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:00,997 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108325.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:16,476 INFO [finetune.py:976] (6/7) Epoch 19, batch 5250, loss[loss=0.1868, simple_loss=0.2658, pruned_loss=0.0539, over 4823.00 frames. ], tot_loss[loss=0.1805, simple_loss=0.2511, pruned_loss=0.05497, over 956803.94 frames. ], batch size: 51, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:40:19,993 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.566e+02 1.984e+02 2.332e+02 4.295e+02, threshold=3.968e+02, percent-clipped=2.0 2023-03-26 23:40:26,551 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=108363.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:42,058 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108386.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:46,363 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108393.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:40:48,237 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0437, 1.9663, 1.6809, 1.9630, 1.8312, 1.8577, 1.8500, 2.5129], device='cuda:6'), covar=tensor([0.3598, 0.4313, 0.3361, 0.3986, 0.4382, 0.2401, 0.4251, 0.1812], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0260, 0.0230, 0.0275, 0.0251, 0.0221, 0.0253, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:40:49,958 INFO [finetune.py:976] (6/7) Epoch 19, batch 5300, loss[loss=0.1607, simple_loss=0.2339, pruned_loss=0.04374, over 4825.00 frames. ], tot_loss[loss=0.182, simple_loss=0.2528, pruned_loss=0.05561, over 955071.53 frames. 
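The scaling.py:679 lines compare a whitening metric against a limit (2.0 for the grouped 96/192-channel activations, 5.0 for the single-group 384-channel case). A plausible reading, sketched below as an assumption rather than the module's verbatim code: split the channels into num_groups groups, form each group's feature covariance C (n channels per group), and report E[λ²]/E[λ]² over its eigenvalues, i.e. n·trace(C²)/trace(C)². That metric is 1.0 for perfectly white (isotropic) features and grows as variance concentrates in a few directions, so a corrective penalty is only warranted when it exceeds the limit:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """E[lambda^2] / E[lambda]^2 over covariance eigenvalues, per the
    reading described above (an assumption, not the module's exact code).

    x: (num_frames, num_channels), num_channels divisible by num_groups.
    Returns the metric averaged over groups; 1.0 means white features.
    """
    num_frames, num_channels = x.shape
    n = num_channels // num_groups
    x = x.reshape(num_frames, num_groups, n).transpose(0, 1)  # (groups, frames, n)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x.transpose(1, 2) @ x / num_frames                  # (groups, n, n)
    tr_c2 = (cov * cov).sum(dim=(1, 2))   # trace(C^2) for symmetric C
    tr_c = cov.diagonal(dim1=1, dim2=2).sum(dim=1)
    return (tr_c2 * n / tr_c.clamp(min=1e-20) ** 2).mean()

# White noise sits near 1.0, comfortably below the logged limit of 2.0:
print(whitening_metric(torch.randn(4000, 96), num_groups=8))
```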
], batch size: 30, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:41:22,523 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108432.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:41:32,018 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=108441.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:41:36,857 INFO [finetune.py:976] (6/7) Epoch 19, batch 5350, loss[loss=0.1663, simple_loss=0.2367, pruned_loss=0.04792, over 4906.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2523, pruned_loss=0.05477, over 956667.06 frames. ], batch size: 37, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:41:39,875 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.546e+01 1.453e+02 1.815e+02 2.266e+02 3.194e+02, threshold=3.630e+02, percent-clipped=0.0 2023-03-26 23:41:52,734 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-26 23:42:09,116 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108493.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:42:11,904 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-26 23:42:12,600 INFO [finetune.py:976] (6/7) Epoch 19, batch 5400, loss[loss=0.1874, simple_loss=0.2593, pruned_loss=0.05778, over 4899.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2498, pruned_loss=0.05421, over 957602.50 frames. ], batch size: 36, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:42:21,044 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3045, 1.2832, 1.2504, 1.2972, 1.5499, 1.4364, 1.3485, 1.2234], device='cuda:6'), covar=tensor([0.0367, 0.0262, 0.0602, 0.0273, 0.0232, 0.0491, 0.0305, 0.0380], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0108, 0.0145, 0.0112, 0.0100, 0.0111, 0.0100, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.5654e-05, 8.3383e-05, 1.1437e-04, 8.5966e-05, 7.8004e-05, 8.2292e-05, 7.4395e-05, 8.5502e-05], device='cuda:6') 2023-03-26 23:42:56,628 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0 2023-03-26 23:42:58,627 INFO [finetune.py:976] (6/7) Epoch 19, batch 5450, loss[loss=0.1352, simple_loss=0.2138, pruned_loss=0.02834, over 4818.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2467, pruned_loss=0.05336, over 957115.53 frames. ], batch size: 38, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:43:01,647 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.000e+02 1.562e+02 1.816e+02 2.216e+02 4.232e+02, threshold=3.632e+02, percent-clipped=2.0 2023-03-26 23:43:17,257 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108577.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:43:31,854 INFO [finetune.py:976] (6/7) Epoch 19, batch 5500, loss[loss=0.197, simple_loss=0.2518, pruned_loss=0.07104, over 4847.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2436, pruned_loss=0.05238, over 956111.96 frames. ], batch size: 49, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:43:53,109 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0 2023-03-26 23:43:58,812 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108638.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:44:05,687 INFO [finetune.py:976] (6/7) Epoch 19, batch 5550, loss[loss=0.1465, simple_loss=0.2213, pruned_loss=0.03588, over 4760.00 frames. 
], tot_loss[loss=0.175, simple_loss=0.2448, pruned_loss=0.05263, over 953834.22 frames. ], batch size: 27, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:44:08,705 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.553e+02 1.822e+02 2.201e+02 3.552e+02, threshold=3.643e+02, percent-clipped=0.0 2023-03-26 23:44:09,434 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9828, 1.8660, 1.9148, 1.2504, 1.9525, 1.9594, 1.9894, 1.6120], device='cuda:6'), covar=tensor([0.0545, 0.0672, 0.0750, 0.0916, 0.0755, 0.0705, 0.0594, 0.1111], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0135, 0.0140, 0.0121, 0.0125, 0.0139, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:44:27,069 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108681.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:44:37,479 INFO [finetune.py:976] (6/7) Epoch 19, batch 5600, loss[loss=0.1278, simple_loss=0.1908, pruned_loss=0.03239, over 4265.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2486, pruned_loss=0.05398, over 952091.01 frames. ], batch size: 18, lr: 3.27e-03, grad_scale: 32.0 2023-03-26 23:44:40,550 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9998, 1.7311, 2.1034, 1.9794, 1.7574, 1.7705, 1.9599, 1.9572], device='cuda:6'), covar=tensor([0.4397, 0.4038, 0.3211, 0.4177, 0.5195, 0.4144, 0.5224, 0.3213], device='cuda:6'), in_proj_covar=tensor([0.0252, 0.0240, 0.0260, 0.0279, 0.0276, 0.0251, 0.0287, 0.0244], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:44:57,953 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6858, 1.2447, 0.8917, 1.4877, 1.9570, 1.4117, 1.5230, 1.5940], device='cuda:6'), covar=tensor([0.1617, 0.2318, 0.2110, 0.1335, 0.2126, 0.2096, 0.1474, 0.2008], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-26 23:45:04,921 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6017, 0.7138, 1.6523, 1.5839, 1.4824, 1.3936, 1.5129, 1.5276], device='cuda:6'), covar=tensor([0.3359, 0.3280, 0.2848, 0.3078, 0.3975, 0.3063, 0.3509, 0.2668], device='cuda:6'), in_proj_covar=tensor([0.0253, 0.0240, 0.0261, 0.0280, 0.0277, 0.0252, 0.0288, 0.0244], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:45:09,467 INFO [finetune.py:976] (6/7) Epoch 19, batch 5650, loss[loss=0.2271, simple_loss=0.2969, pruned_loss=0.07867, over 4904.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2523, pruned_loss=0.0551, over 952801.52 frames. 
], batch size: 43, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:45:11,853 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6944, 1.4826, 1.8732, 1.3501, 1.7263, 1.8855, 1.4385, 2.0395], device='cuda:6'), covar=tensor([0.1115, 0.2078, 0.1313, 0.1591, 0.0906, 0.1314, 0.2823, 0.0845], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0204, 0.0191, 0.0188, 0.0174, 0.0213, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:45:12,318 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.584e+02 1.878e+02 2.184e+02 3.636e+02, threshold=3.756e+02, percent-clipped=0.0 2023-03-26 23:45:32,950 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108788.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 23:45:34,302 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-26 23:45:34,777 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=108791.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:45:39,454 INFO [finetune.py:976] (6/7) Epoch 19, batch 5700, loss[loss=0.1379, simple_loss=0.2006, pruned_loss=0.03765, over 4230.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2481, pruned_loss=0.05457, over 932390.47 frames. ], batch size: 18, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:45:43,909 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.12 vs. limit=2.0 2023-03-26 23:45:51,016 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.4622, 2.8724, 3.3866, 2.6090, 3.2027, 3.6745, 2.7163, 3.6155], device='cuda:6'), covar=tensor([0.0752, 0.1414, 0.1113, 0.1496, 0.0667, 0.0818, 0.1980, 0.0552], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0204, 0.0190, 0.0188, 0.0173, 0.0212, 0.0216, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:46:08,103 INFO [finetune.py:976] (6/7) Epoch 20, batch 0, loss[loss=0.1992, simple_loss=0.2642, pruned_loss=0.0671, over 4883.00 frames. ], tot_loss[loss=0.1992, simple_loss=0.2642, pruned_loss=0.0671, over 4883.00 frames. ], batch size: 35, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:46:08,103 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-26 23:46:16,087 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9496, 1.4725, 1.1398, 1.7566, 2.0561, 1.4237, 1.7447, 1.6908], device='cuda:6'), covar=tensor([0.1075, 0.1434, 0.1444, 0.0844, 0.1613, 0.1630, 0.0968, 0.1457], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 23:46:17,462 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2000, 1.9981, 1.8876, 1.8318, 1.9693, 2.0231, 2.0361, 2.6223], device='cuda:6'), covar=tensor([0.3548, 0.4781, 0.3297, 0.3779, 0.3872, 0.2464, 0.3754, 0.1743], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0261, 0.0230, 0.0275, 0.0251, 0.0221, 0.0252, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:46:24,542 INFO [finetune.py:1010] (6/7) Epoch 20, validation: loss=0.158, simple_loss=0.2276, pruned_loss=0.04423, over 2265189.00 frames. 
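At the first batch of epoch 20 the loop pauses training to compute a validation loss over the full dev set (the "Computing validation loss" / "validation: loss=0.158 ... over 2265189.00 frames" pair above). The frame count shows the average is frame-weighted across all dev batches, which makes the single number comparable across runs regardless of batching. A generic sketch of that pass (compute_loss is a placeholder for the recipe's loss function, not its actual signature):

```python
import torch

def validate(model, dev_loader, compute_loss):
    """Frame-weighted validation loss over the entire dev set.

    compute_loss(model, batch) -> (loss_sum, num_frames) is a placeholder;
    loss_sum must be summed (not averaged) over the batch so that the
    frame weighting below is exact.
    """
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            loss_sum, num_frames = compute_loss(model, batch)
            tot_loss += float(loss_sum)
            tot_frames += float(num_frames)
    model.train()
    return tot_loss / tot_frames  # e.g. the 0.158 over 2265189 frames above
```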
2023-03-26 23:46:24,543 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB 2023-03-26 23:46:36,758 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7580, 1.2368, 0.8867, 1.5959, 2.0056, 1.4844, 1.5818, 1.6347], device='cuda:6'), covar=tensor([0.1456, 0.2046, 0.1901, 0.1131, 0.1939, 0.1865, 0.1314, 0.1872], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 23:46:57,155 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=108852.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:46:58,211 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.806e+01 1.424e+02 1.737e+02 2.098e+02 5.389e+02, threshold=3.475e+02, percent-clipped=2.0 2023-03-26 23:47:17,656 INFO [finetune.py:976] (6/7) Epoch 20, batch 50, loss[loss=0.1425, simple_loss=0.2234, pruned_loss=0.03076, over 4906.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2517, pruned_loss=0.05497, over 217813.72 frames. ], batch size: 37, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:47:46,750 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0925, 1.8803, 1.6567, 1.7359, 2.0131, 1.7609, 2.1934, 2.0416], device='cuda:6'), covar=tensor([0.1375, 0.2229, 0.3056, 0.2753, 0.2474, 0.1723, 0.3101, 0.1833], device='cuda:6'), in_proj_covar=tensor([0.0185, 0.0188, 0.0235, 0.0254, 0.0247, 0.0204, 0.0215, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:47:57,361 INFO [finetune.py:976] (6/7) Epoch 20, batch 100, loss[loss=0.1474, simple_loss=0.2137, pruned_loss=0.0406, over 4528.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2465, pruned_loss=0.05408, over 379072.27 frames. ], batch size: 19, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:48:06,346 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=108933.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:48:14,240 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8128, 1.8022, 1.5631, 1.8959, 2.2028, 1.8917, 1.4173, 1.5039], device='cuda:6'), covar=tensor([0.2155, 0.1914, 0.1874, 0.1692, 0.1640, 0.1165, 0.2416, 0.1906], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0212, 0.0194, 0.0244, 0.0188, 0.0217, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:48:23,149 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.362e+02 1.754e+02 2.070e+02 5.157e+02, threshold=3.508e+02, percent-clipped=1.0 2023-03-26 23:48:38,578 INFO [finetune.py:976] (6/7) Epoch 20, batch 150, loss[loss=0.2026, simple_loss=0.2646, pruned_loss=0.07033, over 4907.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2432, pruned_loss=0.05324, over 508123.05 frames. 
], batch size: 32, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:48:41,560 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=108981.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:48:45,108 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8916, 4.5269, 4.2851, 2.3034, 4.5743, 3.4332, 0.7992, 3.2405], device='cuda:6'), covar=tensor([0.2547, 0.1644, 0.1363, 0.3151, 0.0830, 0.0871, 0.4565, 0.1262], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0174, 0.0158, 0.0128, 0.0158, 0.0121, 0.0145, 0.0121], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 23:49:02,210 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4544, 1.3362, 1.3501, 1.3325, 0.8444, 2.2976, 0.7392, 1.2280], device='cuda:6'), covar=tensor([0.3305, 0.2500, 0.2220, 0.2454, 0.1939, 0.0353, 0.2698, 0.1322], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0122, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 23:49:11,417 INFO [finetune.py:976] (6/7) Epoch 20, batch 200, loss[loss=0.2511, simple_loss=0.3117, pruned_loss=0.0952, over 4832.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2438, pruned_loss=0.05346, over 608667.47 frames. ], batch size: 40, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:49:12,004 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5952, 1.1468, 0.9378, 1.4923, 1.9241, 1.2879, 1.4452, 1.5032], device='cuda:6'), covar=tensor([0.1635, 0.2190, 0.1949, 0.1252, 0.2162, 0.2022, 0.1451, 0.2063], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 23:49:13,183 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109029.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:49:16,079 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8097, 1.2703, 0.8754, 1.6174, 2.0446, 1.3825, 1.6010, 1.5959], device='cuda:6'), covar=tensor([0.1530, 0.2151, 0.2049, 0.1216, 0.2003, 0.1917, 0.1392, 0.1985], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-26 23:49:19,011 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109037.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:49:29,133 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.519e+02 1.780e+02 2.129e+02 3.450e+02, threshold=3.561e+02, percent-clipped=0.0 2023-03-26 23:49:44,471 INFO [finetune.py:976] (6/7) Epoch 20, batch 250, loss[loss=0.1304, simple_loss=0.2057, pruned_loss=0.02753, over 4821.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2453, pruned_loss=0.0538, over 685488.19 frames. 
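Note the tot_loss frame counts at the start of epoch 20: ~218k at batch 50, ~379k at batch 100, ~508k at batch 150, climbing toward the ~950k plateau seen throughout epoch 19. That pattern fits a decayed running sum rather than a plain epoch average: with per-batch totals around 4.8k frames, a plateau near 9.5e5 frames implies a decay constant of roughly 200 batches. A sketch under that assumption (the constant is inferred from the log, not quoted from the recipe):

```python
class RunningLossTracker:
    """Decayed running sums behind the logged "tot_loss[... over N frames]".

    Sketch under an assumption inferred from the log: each batch's
    frame-weighted loss sum is added to totals that decay by (1 - 1/r),
    so the frame count climbs from 0 at the start of an epoch toward a
    plateau of roughly r * frames_per_batch (~200 * 4.8e3 ~ 9.5e5 above).
    """

    def __init__(self, r: float = 200.0):
        self.decay = 1.0 - 1.0 / r
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, batch_loss: float, batch_frames: float):
        self.loss_sum = self.decay * self.loss_sum + batch_loss * batch_frames
        self.frames = self.decay * self.frames + batch_frames

    @property
    def tot_loss(self) -> float:
        return self.loss_sum / max(self.frames, 1.0)
```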
], batch size: 25, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:49:52,157 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109088.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:49:58,703 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109098.0, num_to_drop=1, layers_to_drop={3} 2023-03-26 23:50:17,246 INFO [finetune.py:976] (6/7) Epoch 20, batch 300, loss[loss=0.2143, simple_loss=0.2812, pruned_loss=0.07366, over 4812.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2492, pruned_loss=0.05464, over 746577.59 frames. ], batch size: 38, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:50:23,615 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109136.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:50:31,195 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109147.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:50:35,399 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.518e+02 1.856e+02 2.256e+02 3.204e+02, threshold=3.712e+02, percent-clipped=0.0 2023-03-26 23:50:47,510 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8099, 1.6532, 1.5068, 1.8743, 2.1484, 1.8691, 1.3739, 1.4915], device='cuda:6'), covar=tensor([0.2231, 0.2078, 0.2024, 0.1742, 0.1525, 0.1208, 0.2544, 0.2015], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0210, 0.0211, 0.0194, 0.0244, 0.0188, 0.0216, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:50:50,169 INFO [finetune.py:976] (6/7) Epoch 20, batch 350, loss[loss=0.1999, simple_loss=0.2785, pruned_loss=0.06061, over 4914.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2516, pruned_loss=0.05463, over 793809.04 frames. ], batch size: 38, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:51:01,463 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109194.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:51:25,273 INFO [finetune.py:976] (6/7) Epoch 20, batch 400, loss[loss=0.1627, simple_loss=0.2438, pruned_loss=0.04077, over 4929.00 frames. ], tot_loss[loss=0.1831, simple_loss=0.2543, pruned_loss=0.05597, over 831519.98 frames. ], batch size: 41, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:51:34,318 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109233.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:03,603 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.047e+02 1.657e+02 1.900e+02 2.185e+02 4.941e+02, threshold=3.801e+02, percent-clipped=3.0 2023-03-26 23:52:03,767 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9340, 1.9263, 1.6695, 2.0995, 2.5436, 2.1407, 1.6489, 1.6275], device='cuda:6'), covar=tensor([0.2309, 0.1981, 0.2118, 0.1727, 0.1457, 0.1098, 0.2420, 0.1997], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0211, 0.0194, 0.0244, 0.0188, 0.0216, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:52:04,372 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109255.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:26,281 INFO [finetune.py:976] (6/7) Epoch 20, batch 450, loss[loss=0.1339, simple_loss=0.194, pruned_loss=0.03684, over 4112.00 frames. ], tot_loss[loss=0.1811, simple_loss=0.2519, pruned_loss=0.05508, over 857595.68 frames. 
], batch size: 18, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:52:29,294 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109281.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:38,184 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109294.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:52:54,643 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109318.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:53:00,676 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0 2023-03-26 23:53:02,105 INFO [finetune.py:976] (6/7) Epoch 20, batch 500, loss[loss=0.2247, simple_loss=0.2668, pruned_loss=0.09133, over 4719.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2492, pruned_loss=0.05427, over 879495.42 frames. ], batch size: 54, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:53:34,823 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.492e+02 1.802e+02 2.178e+02 4.247e+02, threshold=3.605e+02, percent-clipped=3.0 2023-03-26 23:53:39,365 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109355.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:53:47,684 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6197, 1.6095, 1.6849, 0.9729, 1.8021, 1.9289, 1.8927, 1.5017], device='cuda:6'), covar=tensor([0.0955, 0.0644, 0.0503, 0.0541, 0.0417, 0.0673, 0.0358, 0.0726], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0124, 0.0124, 0.0130, 0.0128, 0.0142, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.0243e-05, 1.0815e-04, 8.8500e-05, 8.7808e-05, 9.1689e-05, 9.1520e-05, 1.0162e-04, 1.0523e-04], device='cuda:6') 2023-03-26 23:53:52,982 INFO [finetune.py:976] (6/7) Epoch 20, batch 550, loss[loss=0.2009, simple_loss=0.2619, pruned_loss=0.06995, over 4908.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.246, pruned_loss=0.05359, over 896862.94 frames. ], batch size: 35, lr: 3.26e-03, grad_scale: 64.0 2023-03-26 23:53:53,721 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9142, 2.1752, 1.7884, 1.8689, 2.5160, 2.5586, 2.1122, 2.1220], device='cuda:6'), covar=tensor([0.0413, 0.0333, 0.0586, 0.0327, 0.0294, 0.0550, 0.0355, 0.0367], device='cuda:6'), in_proj_covar=tensor([0.0096, 0.0106, 0.0143, 0.0110, 0.0099, 0.0110, 0.0099, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.4665e-05, 8.1787e-05, 1.1256e-04, 8.4643e-05, 7.6756e-05, 8.1074e-05, 7.3525e-05, 8.4114e-05], device='cuda:6') 2023-03-26 23:53:54,343 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109379.0, num_to_drop=1, layers_to_drop={1} 2023-03-26 23:54:04,160 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109393.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:54:26,235 INFO [finetune.py:976] (6/7) Epoch 20, batch 600, loss[loss=0.1899, simple_loss=0.2683, pruned_loss=0.05579, over 4829.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2462, pruned_loss=0.05363, over 910664.19 frames. 
], batch size: 39, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:54:39,806 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109447.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:54:44,545 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.417e+01 1.548e+02 1.729e+02 2.159e+02 3.434e+02, threshold=3.458e+02, percent-clipped=0.0 2023-03-26 23:54:59,322 INFO [finetune.py:976] (6/7) Epoch 20, batch 650, loss[loss=0.2289, simple_loss=0.2966, pruned_loss=0.08059, over 4820.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2483, pruned_loss=0.05379, over 920840.70 frames. ], batch size: 40, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:55:02,494 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6165, 1.4686, 2.2325, 1.9399, 1.7517, 4.1062, 1.3969, 1.6055], device='cuda:6'), covar=tensor([0.0901, 0.1895, 0.1226, 0.0975, 0.1649, 0.0199, 0.1619, 0.1944], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0084, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-26 23:55:10,858 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109495.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:55:15,453 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0475, 2.4759, 2.4195, 1.0819, 2.6223, 2.0816, 1.8069, 2.2729], device='cuda:6'), covar=tensor([0.1115, 0.1153, 0.1936, 0.2410, 0.1878, 0.2303, 0.2646, 0.1295], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0194, 0.0202, 0.0184, 0.0212, 0.0210, 0.0225, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-26 23:55:22,286 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8911, 4.8498, 4.5040, 2.4831, 4.9346, 3.7108, 0.8850, 3.4466], device='cuda:6'), covar=tensor([0.2241, 0.1684, 0.1325, 0.3047, 0.0768, 0.0840, 0.4514, 0.1376], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0175, 0.0158, 0.0129, 0.0159, 0.0122, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 23:55:32,473 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7071, 3.9398, 3.7061, 1.7932, 4.1027, 3.0256, 1.1996, 2.8743], device='cuda:6'), covar=tensor([0.2271, 0.1950, 0.1474, 0.3722, 0.0946, 0.0988, 0.4195, 0.1604], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0174, 0.0158, 0.0129, 0.0158, 0.0122, 0.0145, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-26 23:55:33,024 INFO [finetune.py:976] (6/7) Epoch 20, batch 700, loss[loss=0.12, simple_loss=0.1946, pruned_loss=0.02265, over 4734.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2501, pruned_loss=0.05474, over 927002.29 frames. ], batch size: 23, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:55:47,501 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109550.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:55:51,348 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.095e+02 1.485e+02 1.783e+02 2.085e+02 4.380e+02, threshold=3.566e+02, percent-clipped=3.0 2023-03-26 23:56:06,062 INFO [finetune.py:976] (6/7) Epoch 20, batch 750, loss[loss=0.1457, simple_loss=0.2216, pruned_loss=0.0349, over 4827.00 frames. 
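The learning rate drifts down smoothly across the log (3.28e-03 in mid-epoch 19, then 3.27e-03, 3.26e-03, and 3.25e-03 by epoch 20) instead of stepping, which matches an Eden-style schedule: a base LR damped by inverse-fourth-root factors in both the batch index and the epoch. A sketch with illustrative constants (base_lr, lr_batches, and lr_epochs below are assumptions, not values read from this run); for example, base_lr=4e-3 at batch ≈ 107000 of epoch 19 yields ≈ 3.28e-03, matching the log:

```python
def eden_lr(base_lr: float, batch: int, epoch: float,
            lr_batches: float = 100_000.0, lr_epochs: float = 100.0) -> float:
    """Eden-style schedule (sketch): smooth inverse-4th-root decay in both
    the batch index and the epoch, producing the gentle 3.28e-03 -> 3.25e-03
    drift seen across this log rather than discrete LR steps.
    The default lr_batches/lr_epochs here are assumptions."""
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

print(eden_lr(4e-3, batch=107000, epoch=19))  # ~3.28e-03, as logged above
```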
], tot_loss[loss=0.18, simple_loss=0.2508, pruned_loss=0.05461, over 933735.84 frames. ], batch size: 30, lr: 3.26e-03, grad_scale: 32.0 2023-03-26 23:56:07,742 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109579.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:56:39,569 INFO [finetune.py:976] (6/7) Epoch 20, batch 800, loss[loss=0.1679, simple_loss=0.2357, pruned_loss=0.05011, over 4927.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2513, pruned_loss=0.05463, over 939426.58 frames. ], batch size: 33, lr: 3.25e-03, grad_scale: 32.0 2023-03-26 23:56:42,619 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-26 23:56:50,326 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109640.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:56:56,825 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109650.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:56:59,763 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.018e+02 1.497e+02 1.774e+02 2.103e+02 3.199e+02, threshold=3.548e+02, percent-clipped=0.0 2023-03-26 23:57:03,312 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=109659.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:57:23,386 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109674.0, num_to_drop=1, layers_to_drop={2} 2023-03-26 23:57:25,125 INFO [finetune.py:976] (6/7) Epoch 20, batch 850, loss[loss=0.1638, simple_loss=0.2416, pruned_loss=0.04298, over 4833.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2479, pruned_loss=0.05346, over 943161.29 frames. ], batch size: 33, lr: 3.25e-03, grad_scale: 32.0 2023-03-26 23:57:40,060 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109693.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:58:06,040 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=109720.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:58:10,729 INFO [finetune.py:976] (6/7) Epoch 20, batch 900, loss[loss=0.1408, simple_loss=0.2151, pruned_loss=0.03322, over 4808.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2462, pruned_loss=0.05352, over 944721.76 frames. ], batch size: 25, lr: 3.25e-03, grad_scale: 32.0 2023-03-26 23:58:22,314 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109741.0, num_to_drop=1, layers_to_drop={0} 2023-03-26 23:58:40,321 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.530e+02 1.823e+02 2.181e+02 3.809e+02, threshold=3.647e+02, percent-clipped=1.0 2023-03-26 23:59:03,843 INFO [finetune.py:976] (6/7) Epoch 20, batch 950, loss[loss=0.223, simple_loss=0.2851, pruned_loss=0.08041, over 4826.00 frames. ], tot_loss[loss=0.177, simple_loss=0.246, pruned_loss=0.05395, over 948087.87 frames. ], batch size: 39, lr: 3.25e-03, grad_scale: 32.0 2023-03-26 23:59:36,850 INFO [finetune.py:976] (6/7) Epoch 20, batch 1000, loss[loss=0.1989, simple_loss=0.262, pruned_loss=0.06784, over 4833.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2466, pruned_loss=0.05413, over 948276.87 frames. ], batch size: 30, lr: 3.25e-03, grad_scale: 32.0 2023-03-26 23:59:39,007 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. 
limit=2.0 2023-03-26 23:59:52,407 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109850.0, num_to_drop=0, layers_to_drop=set() 2023-03-26 23:59:55,343 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.630e+02 1.951e+02 2.313e+02 5.473e+02, threshold=3.903e+02, percent-clipped=2.0 2023-03-27 00:00:00,723 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9138, 4.0924, 3.8244, 1.8531, 4.1450, 3.1232, 0.8839, 2.8238], device='cuda:6'), covar=tensor([0.2029, 0.1964, 0.1484, 0.3474, 0.0929, 0.0913, 0.4560, 0.1510], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0131, 0.0161, 0.0123, 0.0147, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 00:00:08,423 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6978, 1.5750, 1.5418, 1.6695, 1.4139, 3.7631, 1.5428, 1.9273], device='cuda:6'), covar=tensor([0.3274, 0.2440, 0.2184, 0.2309, 0.1618, 0.0166, 0.2538, 0.1267], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0122, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 00:00:10,590 INFO [finetune.py:976] (6/7) Epoch 20, batch 1050, loss[loss=0.1667, simple_loss=0.2343, pruned_loss=0.04952, over 4908.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2499, pruned_loss=0.05482, over 950243.43 frames. ], batch size: 32, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:00:10,800 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-27 00:00:24,789 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109898.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:00:43,712 INFO [finetune.py:976] (6/7) Epoch 20, batch 1100, loss[loss=0.179, simple_loss=0.2528, pruned_loss=0.05263, over 4903.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2513, pruned_loss=0.05521, over 951666.09 frames. ], batch size: 35, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:00:49,670 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=109935.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:00:59,781 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109950.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:01,370 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 00:01:02,703 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.532e+02 1.890e+02 2.271e+02 3.423e+02, threshold=3.780e+02, percent-clipped=0.0 2023-03-27 00:01:13,355 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.43 vs. 
limit=2.0 2023-03-27 00:01:14,940 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6499, 0.6898, 1.7058, 1.6334, 1.5188, 1.4791, 1.5457, 1.6033], device='cuda:6'), covar=tensor([0.3513, 0.3688, 0.3080, 0.3382, 0.4291, 0.3116, 0.3892, 0.2879], device='cuda:6'), in_proj_covar=tensor([0.0253, 0.0241, 0.0262, 0.0281, 0.0279, 0.0254, 0.0290, 0.0244], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:01:15,528 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=109974.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:17,224 INFO [finetune.py:976] (6/7) Epoch 20, batch 1150, loss[loss=0.2016, simple_loss=0.2661, pruned_loss=0.06861, over 4887.00 frames. ], tot_loss[loss=0.1818, simple_loss=0.2526, pruned_loss=0.05546, over 950301.98 frames. ], batch size: 32, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:01:31,932 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=109998.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:44,413 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110015.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:45,674 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110017.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:48,681 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110022.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:01:49,000 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-27 00:01:52,170 INFO [finetune.py:976] (6/7) Epoch 20, batch 1200, loss[loss=0.157, simple_loss=0.2368, pruned_loss=0.03855, over 4754.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2498, pruned_loss=0.05425, over 949303.58 frames. ], batch size: 27, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:02:04,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110044.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:02:08,830 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 00:02:11,656 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.682e+01 1.527e+02 1.776e+02 2.166e+02 4.163e+02, threshold=3.552e+02, percent-clipped=2.0 2023-03-27 00:02:32,491 INFO [finetune.py:976] (6/7) Epoch 20, batch 1250, loss[loss=0.1995, simple_loss=0.2607, pruned_loss=0.06922, over 4776.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2474, pruned_loss=0.05355, over 950051.76 frames. ], batch size: 26, lr: 3.25e-03, grad_scale: 32.0 2023-03-27 00:02:33,241 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110078.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:03:02,207 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:03:09,742 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.67 vs. 
2023-03-27 00:03:09,982 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110110.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:03:13,063 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110115.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:03:23,865 INFO [finetune.py:976] (6/7) Epoch 20, batch 1300, loss[loss=0.1823, simple_loss=0.2442, pruned_loss=0.06019, over 4934.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2439, pruned_loss=0.05222, over 951452.21 frames. ], batch size: 38, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:03:25,168 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0
2023-03-27 00:03:33,293 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1531, 2.1331, 2.2098, 1.7064, 2.1589, 2.3159, 2.3209, 1.7977], device='cuda:6'), covar=tensor([0.0481, 0.0448, 0.0599, 0.0772, 0.0786, 0.0496, 0.0421, 0.0900], device='cuda:6'), in_proj_covar=tensor([0.0134, 0.0135, 0.0141, 0.0121, 0.0125, 0.0140, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:03:45,427 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.532e+02 1.884e+02 2.318e+02 4.682e+02, threshold=3.767e+02, percent-clipped=2.0
2023-03-27 00:04:01,786 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7430, 1.5493, 1.5856, 1.6372, 1.1990, 3.7209, 1.4535, 1.8739], device='cuda:6'), covar=tensor([0.3370, 0.2629, 0.2205, 0.2424, 0.1747, 0.0213, 0.2576, 0.1341], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0115, 0.0119, 0.0122, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 00:04:04,758 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110171.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:04:12,712 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110176.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:04:13,180 INFO [finetune.py:976] (6/7) Epoch 20, batch 1350, loss[loss=0.1894, simple_loss=0.2559, pruned_loss=0.06142, over 4741.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2439, pruned_loss=0.05265, over 952276.71 frames. ], batch size: 27, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:05:04,716 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1326, 2.8146, 2.4349, 1.3914, 2.6730, 2.1556, 2.0493, 2.4936], device='cuda:6'), covar=tensor([0.1050, 0.0845, 0.1993, 0.2204, 0.1648, 0.2191, 0.2264, 0.1175], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0193, 0.0200, 0.0182, 0.0210, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:05:15,849 INFO [finetune.py:976] (6/7) Epoch 20, batch 1400, loss[loss=0.1867, simple_loss=0.2524, pruned_loss=0.06045, over 4804.00 frames. ], tot_loss[loss=0.1791, simple_loss=0.2486, pruned_loss=0.05482, over 952572.22 frames. ], batch size: 25, lr: 3.25e-03, grad_scale: 32.0
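[Note: each [finetune.py:976] record reports loss alongside simple_loss and pruned_loss. A hedged sketch of how the logged total could relate to the two components for a pruned RNN-T model; the 0.5 weight is an assumption that happens to reproduce the logged numbers, not a confirmed reading of finetune.py.]

def combine_losses(simple_loss, pruned_loss, simple_loss_scale=0.5):
    # Assumed combination: a down-weighted simple (linear) RNN-T loss plus the
    # pruned RNN-T loss computed on the restricted lattice.
    return simple_loss_scale * simple_loss + pruned_loss

# Batch 1400 above: loss=0.1867, simple_loss=0.2524, pruned_loss=0.06045
print(combine_losses(0.2524, 0.06045))  # 0.18665, consistent with the logged 0.1867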
2023-03-27 00:05:26,734 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110235.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:05:48,219 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.545e+02 1.824e+02 2.176e+02 3.637e+02, threshold=3.648e+02, percent-clipped=0.0
2023-03-27 00:05:56,193 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0
2023-03-27 00:06:02,045 INFO [finetune.py:976] (6/7) Epoch 20, batch 1450, loss[loss=0.1443, simple_loss=0.2189, pruned_loss=0.03483, over 4749.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2489, pruned_loss=0.05381, over 953000.92 frames. ], batch size: 27, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:06:06,234 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110283.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:06,299 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110283.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:10,857 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.0909, 4.3921, 4.6898, 4.9104, 4.7967, 4.5925, 5.2205, 1.6626], device='cuda:6'), covar=tensor([0.0687, 0.0926, 0.0740, 0.0986, 0.1229, 0.1348, 0.0499, 0.5548], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0245, 0.0280, 0.0292, 0.0333, 0.0283, 0.0302, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:06:20,384 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110303.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:28,604 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110315.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:30,453 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110318.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:35,827 INFO [finetune.py:976] (6/7) Epoch 20, batch 1500, loss[loss=0.1627, simple_loss=0.2373, pruned_loss=0.04403, over 4758.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2504, pruned_loss=0.05412, over 955287.02 frames. ], batch size: 28, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:06:47,649 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110344.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:06:54,082 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.19 vs. limit=5.0
2023-03-27 00:06:55,109 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.590e+02 1.939e+02 2.244e+02 3.777e+02, threshold=3.878e+02, percent-clipped=2.0
2023-03-27 00:07:00,485 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110363.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:01,159 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110364.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:07,073 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110373.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:09,448 INFO [finetune.py:976] (6/7) Epoch 20, batch 1550, loss[loss=0.2014, simple_loss=0.2664, pruned_loss=0.06814, over 4865.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2512, pruned_loss=0.05413, over 956459.15 frames. ], batch size: 31, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:07:10,799 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110379.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:21,943 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1439, 1.7360, 2.4643, 1.4738, 2.0704, 2.3452, 1.6700, 2.4200], device='cuda:6'), covar=tensor([0.1258, 0.1900, 0.1381, 0.2108, 0.1000, 0.1477, 0.2554, 0.0832], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0206, 0.0192, 0.0191, 0.0176, 0.0215, 0.0220, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:07:25,522 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110400.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:07:38,441 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6934, 1.5640, 1.0899, 0.2623, 1.2310, 1.4894, 1.4812, 1.5115], device='cuda:6'), covar=tensor([0.0933, 0.0821, 0.1340, 0.2007, 0.1469, 0.2423, 0.2454, 0.0962], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0191, 0.0199, 0.0181, 0.0208, 0.0206, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:07:42,123 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0
2023-03-27 00:07:43,129 INFO [finetune.py:976] (6/7) Epoch 20, batch 1600, loss[loss=0.1743, simple_loss=0.2539, pruned_loss=0.04742, over 4753.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.249, pruned_loss=0.05367, over 957370.27 frames. ], batch size: 26, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:08:13,515 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.758e+01 1.502e+02 1.787e+02 2.306e+02 4.709e+02, threshold=3.574e+02, percent-clipped=2.0
2023-03-27 00:08:29,713 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110466.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:08:33,170 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110471.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:08:40,392 INFO [finetune.py:976] (6/7) Epoch 20, batch 1650, loss[loss=0.1551, simple_loss=0.219, pruned_loss=0.0456, over 4826.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2466, pruned_loss=0.05332, over 957801.90 frames. ], batch size: 51, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:08:41,704 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0374, 1.9911, 2.0895, 1.3837, 2.0294, 2.1292, 2.1669, 1.7214], device='cuda:6'), covar=tensor([0.0525, 0.0555, 0.0608, 0.0847, 0.0673, 0.0613, 0.0534, 0.1018], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0134, 0.0139, 0.0119, 0.0124, 0.0138, 0.0139, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:08:44,154 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3022, 1.2414, 1.1905, 1.2583, 1.5672, 1.4808, 1.3198, 1.2272], device='cuda:6'), covar=tensor([0.0352, 0.0290, 0.0612, 0.0275, 0.0207, 0.0359, 0.0326, 0.0347], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0107, 0.0144, 0.0111, 0.0100, 0.0111, 0.0100, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.5280e-05, 8.2742e-05, 1.1370e-04, 8.5628e-05, 7.7928e-05, 8.2295e-05, 7.4125e-05, 8.4995e-05], device='cuda:6')
2023-03-27 00:09:19,429 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3596, 1.4755, 1.7218, 1.4595, 1.6768, 3.0179, 1.5098, 1.5965], device='cuda:6'), covar=tensor([0.0953, 0.1789, 0.0939, 0.1009, 0.1451, 0.0248, 0.1405, 0.1684], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 00:09:24,056 INFO [finetune.py:976] (6/7) Epoch 20, batch 1700, loss[loss=0.1937, simple_loss=0.2833, pruned_loss=0.05201, over 4804.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2441, pruned_loss=0.05264, over 955340.59 frames. ], batch size: 41, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:09:37,995 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4313, 2.1982, 2.0231, 2.2937, 2.1030, 2.1755, 2.1091, 2.8982], device='cuda:6'), covar=tensor([0.3868, 0.4429, 0.3231, 0.3756, 0.3813, 0.2527, 0.3992, 0.1614], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0261, 0.0231, 0.0275, 0.0251, 0.0221, 0.0252, 0.0232], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:09:42,040 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5425, 1.4020, 1.2155, 1.4949, 1.7739, 1.6531, 1.5265, 1.3277], device='cuda:6'), covar=tensor([0.0341, 0.0317, 0.0654, 0.0312, 0.0235, 0.0555, 0.0319, 0.0411], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0107, 0.0144, 0.0111, 0.0100, 0.0111, 0.0100, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.5184e-05, 8.2552e-05, 1.1353e-04, 8.5551e-05, 7.7848e-05, 8.2262e-05, 7.4117e-05, 8.4887e-05], device='cuda:6')
2023-03-27 00:09:42,518 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.500e+02 1.773e+02 2.246e+02 3.830e+02, threshold=3.546e+02, percent-clipped=2.0
2023-03-27 00:09:57,624 INFO [finetune.py:976] (6/7) Epoch 20, batch 1750, loss[loss=0.1895, simple_loss=0.2666, pruned_loss=0.05615, over 4816.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2456, pruned_loss=0.05355, over 955753.87 frames. ], batch size: 40, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:10:10,418 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6173, 1.3620, 2.0176, 1.2516, 1.7247, 1.7949, 1.2219, 1.9760], device='cuda:6'), covar=tensor([0.1628, 0.2528, 0.1358, 0.2151, 0.1323, 0.1854, 0.3396, 0.1221], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0204, 0.0190, 0.0189, 0.0173, 0.0213, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:10:20,421 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110611.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:10:30,986 INFO [finetune.py:976] (6/7) Epoch 20, batch 1800, loss[loss=0.1997, simple_loss=0.2813, pruned_loss=0.05907, over 4896.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2495, pruned_loss=0.0542, over 956747.93 frames. ], batch size: 37, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:10:38,897 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110639.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:10:51,843 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0
2023-03-27 00:10:53,424 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3913, 3.8050, 4.0803, 4.1648, 4.1753, 3.9179, 4.4512, 1.8851], device='cuda:6'), covar=tensor([0.0753, 0.0779, 0.0666, 0.0876, 0.1147, 0.1302, 0.0682, 0.4944], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0246, 0.0278, 0.0293, 0.0332, 0.0284, 0.0303, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:10:55,666 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.630e+02 1.887e+02 2.220e+02 3.285e+02, threshold=3.774e+02, percent-clipped=0.0
2023-03-27 00:10:55,850 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0853, 1.5116, 2.0965, 2.0114, 1.9235, 1.8496, 1.9781, 1.9866], device='cuda:6'), covar=tensor([0.3692, 0.3760, 0.3288, 0.3606, 0.4403, 0.3600, 0.4279, 0.2951], device='cuda:6'), in_proj_covar=tensor([0.0252, 0.0240, 0.0262, 0.0281, 0.0278, 0.0253, 0.0289, 0.0243], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:11:02,082 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110659.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:03,349 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3717, 1.4843, 1.5624, 1.0146, 1.5508, 1.8057, 1.7993, 1.3830], device='cuda:6'), covar=tensor([0.0865, 0.0574, 0.0503, 0.0480, 0.0421, 0.0521, 0.0288, 0.0618], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0125, 0.0125, 0.0131, 0.0129, 0.0142, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.0930e-05, 1.0884e-04, 8.9388e-05, 8.8388e-05, 9.2352e-05, 9.2119e-05, 1.0179e-04, 1.0589e-04], device='cuda:6')
2023-03-27 00:11:11,045 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110672.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:11,632 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110673.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:12,218 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110674.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:14,002 INFO [finetune.py:976] (6/7) Epoch 20, batch 1850, loss[loss=0.156, simple_loss=0.2295, pruned_loss=0.04122, over 4808.00 frames. ], tot_loss[loss=0.178, simple_loss=0.249, pruned_loss=0.0535, over 955459.76 frames. ], batch size: 40, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:11:26,303 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0
2023-03-27 00:11:29,602 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110700.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:40,782 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8829, 3.3907, 3.6440, 3.7912, 3.6524, 3.3419, 3.9346, 1.1968], device='cuda:6'), covar=tensor([0.0899, 0.0903, 0.1005, 0.1134, 0.1483, 0.1839, 0.0937, 0.5947], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0247, 0.0280, 0.0294, 0.0334, 0.0285, 0.0304, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:11:44,155 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110721.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:11:47,773 INFO [finetune.py:976] (6/7) Epoch 20, batch 1900, loss[loss=0.1466, simple_loss=0.2311, pruned_loss=0.03109, over 4850.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2506, pruned_loss=0.05355, over 953874.96 frames. ], batch size: 49, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:11:54,339 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110736.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:01,620 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110748.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:06,290 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.472e+02 1.826e+02 2.198e+02 3.929e+02, threshold=3.651e+02, percent-clipped=1.0
2023-03-27 00:12:14,072 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110766.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:16,309 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.7045, 4.0800, 4.2898, 4.4712, 4.4691, 4.2271, 4.7838, 1.6711], device='cuda:6'), covar=tensor([0.0734, 0.0878, 0.0778, 0.0884, 0.1278, 0.1486, 0.0643, 0.5558], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0245, 0.0278, 0.0292, 0.0333, 0.0283, 0.0303, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:12:17,593 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110771.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:19,952 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4436, 2.2456, 2.0286, 2.3049, 2.1163, 2.1728, 2.2181, 2.8173], device='cuda:6'), covar=tensor([0.3510, 0.4407, 0.3209, 0.3611, 0.3805, 0.2448, 0.3658, 0.1755], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0231, 0.0276, 0.0252, 0.0221, 0.0253, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:12:21,624 INFO [finetune.py:976] (6/7) Epoch 20, batch 1950, loss[loss=0.1381, simple_loss=0.208, pruned_loss=0.03409, over 4812.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2487, pruned_loss=0.0526, over 954470.33 frames. ], batch size: 25, lr: 3.25e-03, grad_scale: 32.0
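[Note: the [zipformer.py:1188] records report a per-stack stochastic layer-drop decision; almost all show num_to_drop=0, with occasional records like num_to_drop=1, layers_to_drop={1}. A purely illustrative sketch of such a decision; the probabilities and the warmup-window rule are assumptions, not the actual zipformer.py schedule.]

import random

def pick_layers_to_drop(batch_count, warmup_begin, warmup_end, num_layers,
                        base_prob=0.02, warmup_prob=0.5, rng=random):
    # Assumed schedule: drop a random layer aggressively while the stack is
    # still inside its warmup window, and only rarely afterwards (which would
    # explain why long-trained runs like this one mostly log empty sets).
    in_warmup = warmup_begin <= batch_count < warmup_end
    if rng.random() < (warmup_prob if in_warmup else base_prob):
        return {rng.randrange(num_layers)}
    return set()

print(pick_layers_to_drop(110700.0, 2000.0, 2666.7, 4))  # usually set()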
2023-03-27 00:12:22,324 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3709, 1.5367, 0.9026, 2.0245, 2.6761, 1.8363, 1.8751, 2.0434], device='cuda:6'), covar=tensor([0.1455, 0.2125, 0.2181, 0.1242, 0.1795, 0.2035, 0.1601, 0.2053], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-27 00:12:34,809 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110797.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:12:46,182 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110814.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:49,734 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110819.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:12:55,565 INFO [finetune.py:976] (6/7) Epoch 20, batch 2000, loss[loss=0.1565, simple_loss=0.226, pruned_loss=0.04349, over 4903.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2467, pruned_loss=0.05212, over 955531.79 frames. ], batch size: 36, lr: 3.25e-03, grad_scale: 32.0
2023-03-27 00:13:17,583 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.482e+02 1.740e+02 2.016e+02 2.901e+02, threshold=3.480e+02, percent-clipped=0.0
2023-03-27 00:13:30,915 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110868.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:13:38,672 INFO [finetune.py:976] (6/7) Epoch 20, batch 2050, loss[loss=0.1897, simple_loss=0.2533, pruned_loss=0.06304, over 4853.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2455, pruned_loss=0.0522, over 954366.64 frames. ], batch size: 47, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:14:28,242 INFO [finetune.py:976] (6/7) Epoch 20, batch 2100, loss[loss=0.1655, simple_loss=0.2356, pruned_loss=0.04773, over 4898.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2456, pruned_loss=0.05245, over 955055.58 frames. ], batch size: 32, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:14:29,623 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=110929.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:39,985 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110939.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:50,629 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.458e+01 1.520e+02 1.862e+02 2.217e+02 3.516e+02, threshold=3.725e+02, percent-clipped=1.0
2023-03-27 00:14:52,595 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=110958.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:53,171 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110959.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:14:58,413 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=110967.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:03,725 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=110974.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:05,918 INFO [finetune.py:976] (6/7) Epoch 20, batch 2150, loss[loss=0.1513, simple_loss=0.2114, pruned_loss=0.04556, over 4182.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2488, pruned_loss=0.05384, over 955747.30 frames. ], batch size: 18, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:15:12,589 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=110987.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:25,803 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111007.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:33,630 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111019.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:35,880 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111022.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:15:38,044 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.80 vs. limit=5.0
2023-03-27 00:15:38,858 INFO [finetune.py:976] (6/7) Epoch 20, batch 2200, loss[loss=0.1748, simple_loss=0.2401, pruned_loss=0.05481, over 4888.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2514, pruned_loss=0.05472, over 957109.15 frames. ], batch size: 35, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:15:50,043 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4363, 2.3274, 1.7668, 0.8560, 1.9121, 1.8638, 1.8263, 2.0765], device='cuda:6'), covar=tensor([0.0995, 0.0752, 0.1822, 0.2193, 0.1480, 0.2440, 0.2128, 0.0978], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0192, 0.0201, 0.0183, 0.0211, 0.0207, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:16:00,234 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.523e+02 1.854e+02 2.195e+02 4.707e+02, threshold=3.708e+02, percent-clipped=2.0
2023-03-27 00:16:10,017 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7825, 1.7665, 1.3905, 1.8035, 2.1715, 1.8794, 1.5544, 1.3888], device='cuda:6'), covar=tensor([0.2043, 0.1757, 0.1931, 0.1594, 0.1738, 0.1181, 0.2292, 0.1971], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0194, 0.0243, 0.0188, 0.0216, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:16:23,021 INFO [finetune.py:976] (6/7) Epoch 20, batch 2250, loss[loss=0.1663, simple_loss=0.2469, pruned_loss=0.04284, over 4904.00 frames. ], tot_loss[loss=0.1814, simple_loss=0.2523, pruned_loss=0.05522, over 957679.57 frames. ], batch size: 46, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:16:27,090 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0405, 1.4553, 0.7283, 1.8476, 2.3593, 1.6481, 1.6262, 1.8622], device='cuda:6'), covar=tensor([0.1417, 0.1968, 0.2178, 0.1182, 0.1886, 0.1915, 0.1491, 0.1944], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0092, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-27 00:16:33,699 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111092.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 00:16:56,460 INFO [finetune.py:976] (6/7) Epoch 20, batch 2300, loss[loss=0.1694, simple_loss=0.2514, pruned_loss=0.04372, over 4857.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2519, pruned_loss=0.05431, over 955241.31 frames. ], batch size: 31, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:17:15,915 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.510e+02 1.791e+02 2.077e+02 4.254e+02, threshold=3.582e+02, percent-clipped=2.0
2023-03-27 00:17:21,451 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0
2023-03-27 00:17:30,233 INFO [finetune.py:976] (6/7) Epoch 20, batch 2350, loss[loss=0.1518, simple_loss=0.2358, pruned_loss=0.03385, over 4754.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2486, pruned_loss=0.05357, over 953722.93 frames. ], batch size: 26, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:17:30,938 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5198, 1.1095, 0.7453, 1.3590, 1.9294, 0.8150, 1.2591, 1.4576], device='cuda:6'), covar=tensor([0.1447, 0.2058, 0.1737, 0.1203, 0.1883, 0.1903, 0.1483, 0.1894], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0091, 0.0120, 0.0092, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-27 00:18:01,643 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111224.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:18:02,316 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=111225.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:18:03,426 INFO [finetune.py:976] (6/7) Epoch 20, batch 2400, loss[loss=0.1835, simple_loss=0.2514, pruned_loss=0.05785, over 4834.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.247, pruned_loss=0.05356, over 954746.28 frames. ], batch size: 39, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:18:22,738 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.826e+01 1.548e+02 1.775e+02 2.063e+02 3.363e+02, threshold=3.550e+02, percent-clipped=0.0
2023-03-27 00:18:30,979 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111267.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:18:36,922 INFO [finetune.py:976] (6/7) Epoch 20, batch 2450, loss[loss=0.1969, simple_loss=0.2707, pruned_loss=0.06154, over 4916.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2453, pruned_loss=0.05339, over 954620.24 frames. ], batch size: 36, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:18:50,053 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111286.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:19:21,771 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111314.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:19:22,366 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111315.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:19:33,324 INFO [finetune.py:976] (6/7) Epoch 20, batch 2500, loss[loss=0.1748, simple_loss=0.2477, pruned_loss=0.05096, over 4830.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2455, pruned_loss=0.05329, over 955555.53 frames. ], batch size: 30, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:19:39,198 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0
2023-03-27 00:19:39,521 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8107, 1.0931, 1.8627, 1.8013, 1.6359, 1.5814, 1.6676, 1.7136], device='cuda:6'), covar=tensor([0.3625, 0.3911, 0.3153, 0.3340, 0.4386, 0.3563, 0.4137, 0.2873], device='cuda:6'), in_proj_covar=tensor([0.0254, 0.0243, 0.0264, 0.0283, 0.0281, 0.0255, 0.0291, 0.0245], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:19:47,676 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0
2023-03-27 00:20:03,165 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.591e+02 1.841e+02 2.119e+02 4.112e+02, threshold=3.683e+02, percent-clipped=1.0
2023-03-27 00:20:17,413 INFO [finetune.py:976] (6/7) Epoch 20, batch 2550, loss[loss=0.1946, simple_loss=0.282, pruned_loss=0.05357, over 4917.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2493, pruned_loss=0.05397, over 956104.14 frames. ], batch size: 42, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:20:27,522 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111392.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 00:20:51,266 INFO [finetune.py:976] (6/7) Epoch 20, batch 2600, loss[loss=0.1531, simple_loss=0.2329, pruned_loss=0.03662, over 4749.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2493, pruned_loss=0.05328, over 956265.49 frames. ], batch size: 27, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:20:59,728 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111440.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:21:10,162 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.233e+02 1.591e+02 1.878e+02 2.220e+02 5.233e+02, threshold=3.757e+02, percent-clipped=1.0
2023-03-27 00:21:20,909 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=111468.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:21:31,397 INFO [finetune.py:976] (6/7) Epoch 20, batch 2650, loss[loss=0.1545, simple_loss=0.2335, pruned_loss=0.03777, over 4802.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.25, pruned_loss=0.05289, over 958429.67 frames. ], batch size: 51, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:21:48,632 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8523, 1.7575, 1.7471, 1.2795, 1.9070, 1.9098, 1.8820, 1.5648], device='cuda:6'), covar=tensor([0.0585, 0.0674, 0.0714, 0.0823, 0.0748, 0.0693, 0.0597, 0.1132], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0137, 0.0141, 0.0121, 0.0125, 0.0140, 0.0141, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:22:01,191 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.69 vs. limit=5.0
2023-03-27 00:22:02,666 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4910, 2.2782, 2.4896, 1.1268, 2.8473, 2.7093, 2.6301, 2.0962], device='cuda:6'), covar=tensor([0.0768, 0.0660, 0.0340, 0.0568, 0.0312, 0.0680, 0.0330, 0.0643], device='cuda:6'), in_proj_covar=tensor([0.0125, 0.0151, 0.0125, 0.0125, 0.0131, 0.0130, 0.0142, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.1388e-05, 1.0958e-04, 8.9587e-05, 8.8504e-05, 9.2186e-05, 9.3065e-05, 1.0182e-04, 1.0663e-04], device='cuda:6')
2023-03-27 00:22:06,260 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111524.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:07,975 INFO [finetune.py:976] (6/7) Epoch 20, batch 2700, loss[loss=0.1732, simple_loss=0.2345, pruned_loss=0.05592, over 4885.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2493, pruned_loss=0.05244, over 956956.86 frames. ], batch size: 32, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:22:10,273 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111529.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:19,190 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7522, 1.6397, 1.4647, 1.5926, 2.0317, 2.0033, 1.6426, 1.5127], device='cuda:6'), covar=tensor([0.0311, 0.0336, 0.0588, 0.0324, 0.0218, 0.0395, 0.0378, 0.0439], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0107, 0.0145, 0.0111, 0.0100, 0.0111, 0.0100, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.5230e-05, 8.2648e-05, 1.1395e-04, 8.5428e-05, 7.7899e-05, 8.2105e-05, 7.4476e-05, 8.5633e-05], device='cuda:6')
2023-03-27 00:22:27,298 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.466e+01 1.581e+02 1.832e+02 2.307e+02 3.346e+02, threshold=3.664e+02, percent-clipped=0.0
2023-03-27 00:22:38,119 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111572.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:41,150 INFO [finetune.py:976] (6/7) Epoch 20, batch 2750, loss[loss=0.1785, simple_loss=0.2494, pruned_loss=0.05376, over 4926.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2473, pruned_loss=0.05248, over 956875.59 frames. ], batch size: 33, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:22:44,079 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111581.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:22:49,575 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4008, 2.1606, 1.5948, 0.7622, 1.8193, 1.8896, 1.7423, 1.9105], device='cuda:6'), covar=tensor([0.0854, 0.0804, 0.1683, 0.2004, 0.1542, 0.2116, 0.2227, 0.1001], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0200, 0.0182, 0.0211, 0.0206, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:23:06,513 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111614.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:23:07,075 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9463, 4.7758, 4.5100, 2.7195, 4.8457, 3.7369, 1.0485, 3.4860], device='cuda:6'), covar=tensor([0.2211, 0.1907, 0.1273, 0.2971, 0.0737, 0.0805, 0.4457, 0.1341], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0179, 0.0162, 0.0132, 0.0163, 0.0124, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-27 00:23:14,366 INFO [finetune.py:976] (6/7) Epoch 20, batch 2800, loss[loss=0.1256, simple_loss=0.1997, pruned_loss=0.02581, over 4755.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2441, pruned_loss=0.05108, over 959298.74 frames. ], batch size: 26, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:23:19,796 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0398, 1.3598, 0.7456, 1.9494, 2.4577, 1.7198, 1.6180, 1.8624], device='cuda:6'), covar=tensor([0.1972, 0.2824, 0.2684, 0.1553, 0.2062, 0.2517, 0.2119, 0.2826], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-27 00:23:31,141 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
2023-03-27 00:23:32,758 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.029e+01 1.565e+02 1.748e+02 2.177e+02 3.583e+02, threshold=3.496e+02, percent-clipped=0.0
2023-03-27 00:23:38,056 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111662.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:23:48,042 INFO [finetune.py:976] (6/7) Epoch 20, batch 2850, loss[loss=0.2078, simple_loss=0.2774, pruned_loss=0.06914, over 4869.00 frames. ], tot_loss[loss=0.174, simple_loss=0.244, pruned_loss=0.05198, over 957152.74 frames. ], batch size: 34, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:23:49,301 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6243, 3.5546, 3.4431, 1.6474, 3.6963, 2.8406, 0.9094, 2.4834], device='cuda:6'), covar=tensor([0.3300, 0.2134, 0.1420, 0.3580, 0.1103, 0.0970, 0.4300, 0.1604], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0178, 0.0162, 0.0131, 0.0163, 0.0124, 0.0148, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-27 00:24:01,303 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0
2023-03-27 00:24:09,414 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5213, 3.1333, 2.9480, 1.6412, 3.2883, 2.6218, 1.2406, 2.3192], device='cuda:6'), covar=tensor([0.3157, 0.2143, 0.1641, 0.3178, 0.1268, 0.0918, 0.3412, 0.1418], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0179, 0.0162, 0.0132, 0.0163, 0.0124, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-27 00:24:41,628 INFO [finetune.py:976] (6/7) Epoch 20, batch 2900, loss[loss=0.1697, simple_loss=0.2532, pruned_loss=0.04312, over 4880.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2471, pruned_loss=0.05293, over 955878.48 frames. ], batch size: 34, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:25:12,981 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.542e+02 1.820e+02 2.254e+02 5.949e+02, threshold=3.641e+02, percent-clipped=1.0
2023-03-27 00:25:31,431 INFO [finetune.py:976] (6/7) Epoch 20, batch 2950, loss[loss=0.1657, simple_loss=0.242, pruned_loss=0.04468, over 4821.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2476, pruned_loss=0.05301, over 951739.83 frames. ], batch size: 40, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:25:40,065 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=111791.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:26:03,396 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=111824.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:26:05,123 INFO [finetune.py:976] (6/7) Epoch 20, batch 3000, loss[loss=0.1753, simple_loss=0.2432, pruned_loss=0.05369, over 4763.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.25, pruned_loss=0.0542, over 950644.49 frames. ], batch size: 28, lr: 3.24e-03, grad_scale: 64.0
2023-03-27 00:26:05,123 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-27 00:26:20,307 INFO [finetune.py:1010] (6/7) Epoch 20, validation: loss=0.1563, simple_loss=0.2257, pruned_loss=0.04344, over 2265189.00 frames.
2023-03-27 00:26:20,307 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6345MB
2023-03-27 00:26:39,543 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8208, 1.9264, 2.3095, 2.1481, 1.9942, 4.4181, 1.9124, 1.9245], device='cuda:6'), covar=tensor([0.0913, 0.1633, 0.1134, 0.0901, 0.1485, 0.0188, 0.1332, 0.1746], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0077, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 00:26:40,753 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=111852.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:26:47,146 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-27 00:26:48,439 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.202e+02 1.618e+02 1.875e+02 2.230e+02 3.575e+02, threshold=3.749e+02, percent-clipped=0.0
2023-03-27 00:27:09,490 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.54 vs. limit=2.0
2023-03-27 00:27:11,149 INFO [finetune.py:976] (6/7) Epoch 20, batch 3050, loss[loss=0.1809, simple_loss=0.2531, pruned_loss=0.05433, over 4765.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2508, pruned_loss=0.05443, over 950418.12 frames. ], batch size: 28, lr: 3.24e-03, grad_scale: 32.0
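[Note: the [finetune.py:1001]/[finetune.py:1010] records above interleave a dev-set pass with training; the validation figure is reported over all 2265189 dev frames. A hedged sketch of such a frame-weighted validation loop; validation_loss and compute_loss are hypothetical names standing in for whatever finetune.py actually calls.]

import torch

def validation_loss(model, dev_loader, compute_loss):
    # Accumulate per-batch loss sums weighted by frame counts so the average
    # can be reported "over N frames", as in the log line above.
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            loss_sum, num_frames = compute_loss(model, batch)
            tot_loss += float(loss_sum)
            tot_frames += float(num_frames)
    model.train()  # resume training mode before the next train batch
    return tot_loss / tot_frames, tot_frames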
2023-03-27 00:27:18,693 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=111881.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:27:37,355 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6038, 1.4398, 2.1308, 3.2892, 2.1586, 2.3510, 0.9207, 2.6996], device='cuda:6'), covar=tensor([0.1769, 0.1554, 0.1324, 0.0602, 0.0859, 0.1476, 0.1897, 0.0485], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0100, 0.0134, 0.0123, 0.0098], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 00:27:38,205 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.04 vs. limit=5.0
2023-03-27 00:27:39,799 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7091, 1.6310, 1.6143, 1.6903, 1.4920, 3.2298, 1.6919, 2.0097], device='cuda:6'), covar=tensor([0.2852, 0.2059, 0.1888, 0.2016, 0.1476, 0.0255, 0.2573, 0.1011], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 00:28:14,418 INFO [finetune.py:976] (6/7) Epoch 20, batch 3100, loss[loss=0.1847, simple_loss=0.2517, pruned_loss=0.0588, over 4812.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2491, pruned_loss=0.05354, over 952717.38 frames. ], batch size: 41, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:28:15,706 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=111929.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:28:21,610 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3690, 2.0280, 2.3384, 2.3389, 2.0336, 2.0299, 2.3182, 2.1674], device='cuda:6'), covar=tensor([0.3893, 0.4044, 0.3385, 0.3725, 0.5260, 0.4194, 0.4938, 0.3063], device='cuda:6'), in_proj_covar=tensor([0.0253, 0.0242, 0.0263, 0.0281, 0.0279, 0.0254, 0.0290, 0.0244], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:28:32,998 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.507e+02 1.737e+02 2.301e+02 5.151e+02, threshold=3.474e+02, percent-clipped=3.0
2023-03-27 00:28:47,144 INFO [finetune.py:976] (6/7) Epoch 20, batch 3150, loss[loss=0.1298, simple_loss=0.1851, pruned_loss=0.03729, over 4033.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2463, pruned_loss=0.05282, over 953019.50 frames. ], batch size: 17, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:29:23,069 INFO [finetune.py:976] (6/7) Epoch 20, batch 3200, loss[loss=0.1819, simple_loss=0.2513, pruned_loss=0.05627, over 4845.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2439, pruned_loss=0.05237, over 953790.33 frames. ], batch size: 49, lr: 3.24e-03, grad_scale: 32.0
2023-03-27 00:29:43,190 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5312, 1.3390, 1.2498, 1.5237, 1.5438, 1.5180, 0.9217, 1.2438], device='cuda:6'), covar=tensor([0.2324, 0.2302, 0.2198, 0.1785, 0.1745, 0.1410, 0.2923, 0.2070], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0213, 0.0194, 0.0244, 0.0189, 0.0218, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:29:52,255 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0390, 1.8500, 1.7469, 2.1338, 2.5164, 2.1840, 1.7046, 1.6593], device='cuda:6'), covar=tensor([0.2045, 0.2002, 0.1913, 0.1553, 0.1447, 0.1074, 0.2239, 0.1901], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0211, 0.0213, 0.0194, 0.0244, 0.0189, 0.0218, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:29:54,664 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2292, 2.0460, 1.8317, 2.0006, 1.9620, 1.9803, 2.0701, 2.7500], device='cuda:6'), covar=tensor([0.3639, 0.4447, 0.3048, 0.3972, 0.3989, 0.2416, 0.3822, 0.1632], device='cuda:6'), in_proj_covar=tensor([0.0285, 0.0261, 0.0229, 0.0275, 0.0250, 0.0221, 0.0250, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:29:56,972 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.705e+01 1.565e+02 1.739e+02 2.228e+02 3.922e+02, threshold=3.479e+02, percent-clipped=1.0
2023-03-27 00:30:06,303 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112066.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:30:13,834 INFO [finetune.py:976] (6/7) Epoch 20, batch 3250, loss[loss=0.2294, simple_loss=0.2934, pruned_loss=0.08268, over 4904.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2439, pruned_loss=0.05202, over 955249.87 frames. ], batch size: 36, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:30:34,856 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0109, 0.8886, 0.8520, 0.8913, 1.1784, 1.1587, 1.0137, 0.9112], device='cuda:6'), covar=tensor([0.0342, 0.0324, 0.0745, 0.0349, 0.0258, 0.0390, 0.0310, 0.0371], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0107, 0.0144, 0.0111, 0.0100, 0.0111, 0.0099, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.4803e-05, 8.2225e-05, 1.1322e-04, 8.5039e-05, 7.7596e-05, 8.1964e-05, 7.4073e-05, 8.5440e-05], device='cuda:6')
2023-03-27 00:30:54,516 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=112124.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:30:55,158 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1720, 2.2012, 1.7927, 2.1552, 2.0988, 2.0663, 2.1011, 2.8843], device='cuda:6'), covar=tensor([0.3817, 0.4714, 0.3454, 0.4346, 0.4401, 0.2429, 0.4565, 0.1669], device='cuda:6'), in_proj_covar=tensor([0.0284, 0.0260, 0.0229, 0.0275, 0.0250, 0.0220, 0.0250, 0.0231], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:30:56,192 INFO [finetune.py:976] (6/7) Epoch 20, batch 3300, loss[loss=0.2567, simple_loss=0.3178, pruned_loss=0.09779, over 4728.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2475, pruned_loss=0.05305, over 954090.09 frames. ], batch size: 59, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:30:56,313 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112127.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:31:10,260 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112147.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 00:31:12,074 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0
2023-03-27 00:31:16,121 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.319e+01 1.653e+02 1.982e+02 2.472e+02 3.934e+02, threshold=3.965e+02, percent-clipped=2.0
2023-03-27 00:31:26,941 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=112172.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:31:29,977 INFO [finetune.py:976] (6/7) Epoch 20, batch 3350, loss[loss=0.1718, simple_loss=0.2467, pruned_loss=0.04839, over 4760.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2486, pruned_loss=0.05306, over 953828.95 frames. ], batch size: 54, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:32:11,817 INFO [finetune.py:976] (6/7) Epoch 20, batch 3400, loss[loss=0.2128, simple_loss=0.2753, pruned_loss=0.07516, over 4883.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2509, pruned_loss=0.05414, over 954336.90 frames. ], batch size: 35, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:32:28,441 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.4263, 2.9847, 2.7401, 1.3887, 2.9278, 2.3250, 2.2684, 2.6754], device='cuda:6'), covar=tensor([0.0677, 0.0897, 0.1552, 0.2173, 0.1519, 0.2146, 0.2078, 0.1013], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0200, 0.0182, 0.0211, 0.0207, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:32:29,029 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6224, 1.4735, 1.9248, 2.9980, 1.9733, 2.3455, 1.1376, 2.5307], device='cuda:6'), covar=tensor([0.1688, 0.1371, 0.1219, 0.0650, 0.0857, 0.1095, 0.1631, 0.0474], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0100, 0.0134, 0.0123, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 00:32:31,202 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.676e+02 1.964e+02 2.392e+02 4.564e+02, threshold=3.928e+02, percent-clipped=2.0
2023-03-27 00:32:31,346 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8801, 1.8355, 1.6581, 2.0429, 2.3600, 2.0049, 1.7126, 1.4929], device='cuda:6'), covar=tensor([0.2243, 0.2006, 0.1924, 0.1672, 0.1780, 0.1242, 0.2376, 0.2058], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0213, 0.0195, 0.0244, 0.0189, 0.0218, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 00:32:34,359 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4362, 1.5593, 1.2511, 1.4961, 1.8985, 1.7565, 1.5382, 1.3953], device='cuda:6'), covar=tensor([0.0410, 0.0327, 0.0670, 0.0296, 0.0194, 0.0586, 0.0336, 0.0411], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0108, 0.0145, 0.0112, 0.0101, 0.0112, 0.0100, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.5499e-05, 8.2920e-05, 1.1431e-04, 8.5745e-05, 7.8383e-05, 8.2965e-05, 7.4686e-05, 8.6228e-05], device='cuda:6')
2023-03-27 00:32:44,380 INFO [finetune.py:976] (6/7) Epoch 20, batch 3450, loss[loss=0.1792, simple_loss=0.2475, pruned_loss=0.05546, over 4122.00 frames. ], tot_loss[loss=0.1794, simple_loss=0.251, pruned_loss=0.05394, over 954888.28 frames. ], batch size: 18, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:33:19,430 INFO [finetune.py:976] (6/7) Epoch 20, batch 3500, loss[loss=0.1498, simple_loss=0.2085, pruned_loss=0.04557, over 4058.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.247, pruned_loss=0.05279, over 954046.58 frames. ], batch size: 17, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:33:29,344 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5216, 3.3226, 3.2511, 1.6033, 3.4468, 2.6220, 0.9731, 2.4974], device='cuda:6'), covar=tensor([0.2159, 0.2240, 0.1562, 0.3405, 0.1070, 0.1013, 0.4101, 0.1407], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0177, 0.0160, 0.0129, 0.0162, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-27 00:33:56,443 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.856e+01 1.454e+02 1.799e+02 2.150e+02 5.052e+02, threshold=3.598e+02, percent-clipped=2.0
2023-03-27 00:34:18,959 INFO [finetune.py:976] (6/7) Epoch 20, batch 3550, loss[loss=0.1572, simple_loss=0.2233, pruned_loss=0.04555, over 4150.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2449, pruned_loss=0.05227, over 955236.75 frames. ], batch size: 65, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:34:28,784 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3976, 1.4232, 1.6952, 1.6851, 1.5801, 2.7294, 1.3767, 1.5486], device='cuda:6'), covar=tensor([0.0923, 0.1594, 0.1261, 0.0871, 0.1422, 0.0327, 0.1380, 0.1583], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0082, 0.0075, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 00:34:36,592 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112389.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:35:17,938 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112422.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:35:20,889 INFO [finetune.py:976] (6/7) Epoch 20, batch 3600, loss[loss=0.1433, simple_loss=0.2137, pruned_loss=0.03647, over 4769.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2422, pruned_loss=0.05153, over 954567.71 frames. ], batch size: 28, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:35:32,696 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.62 vs. limit=2.0
2023-03-27 00:35:37,947 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=112447.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:35:39,843 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112450.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:35:44,342 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.527e+02 1.833e+02 2.137e+02 4.874e+02, threshold=3.667e+02, percent-clipped=1.0
2023-03-27 00:36:08,334 INFO [finetune.py:976] (6/7) Epoch 20, batch 3650, loss[loss=0.2182, simple_loss=0.2856, pruned_loss=0.07541, over 4837.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2443, pruned_loss=0.05218, over 954167.39 frames. ], batch size: 47, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:36:20,409 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=112495.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 00:36:41,250 INFO [finetune.py:976] (6/7) Epoch 20, batch 3700, loss[loss=0.1755, simple_loss=0.2484, pruned_loss=0.05135, over 4904.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2479, pruned_loss=0.0532, over 953415.81 frames. ], batch size: 37, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:37:01,297 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.006e+02 1.568e+02 1.941e+02 2.323e+02 3.962e+02, threshold=3.882e+02, percent-clipped=1.0
2023-03-27 00:37:03,790 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0190, 1.6621, 1.8200, 0.6982, 2.1489, 2.4258, 1.9569, 1.6898], device='cuda:6'), covar=tensor([0.1002, 0.1109, 0.0647, 0.0773, 0.0708, 0.0530, 0.0650, 0.0737], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0125, 0.0124, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.0439e-05, 1.0861e-04, 8.9138e-05, 8.7751e-05, 9.1464e-05, 9.2601e-05, 1.0136e-04, 1.0592e-04], device='cuda:6')
2023-03-27 00:37:05,565 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112562.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:10,865 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5265, 1.6362, 1.3378, 1.4825, 1.9678, 1.8709, 1.6369, 1.4624], device='cuda:6'), covar=tensor([0.0355, 0.0312, 0.0547, 0.0326, 0.0203, 0.0489, 0.0341, 0.0370], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0107, 0.0144, 0.0111, 0.0100, 0.0111, 0.0099, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.5094e-05, 8.2293e-05, 1.1336e-04, 8.5197e-05, 7.7940e-05, 8.2365e-05, 7.4049e-05, 8.5713e-05], device='cuda:6')
2023-03-27 00:37:15,916 INFO [finetune.py:976] (6/7) Epoch 20, batch 3750, loss[loss=0.1656, simple_loss=0.2215, pruned_loss=0.05483, over 3982.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2497, pruned_loss=0.05354, over 953644.30 frames. ], batch size: 17, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:37:16,037 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112577.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:50,125 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112617.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:50,432 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.20 vs. limit=5.0
2023-03-27 00:37:54,222 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112623.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 00:37:57,419 INFO [finetune.py:976] (6/7) Epoch 20, batch 3800, loss[loss=0.2021, simple_loss=0.2655, pruned_loss=0.06934, over 4882.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2506, pruned_loss=0.05366, over 953178.72 frames. ], batch size: 32, lr: 3.23e-03, grad_scale: 32.0
2023-03-27 00:38:03,780 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0
limit=2.0 2023-03-27 00:38:04,232 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112638.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 00:38:16,043 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.507e+01 1.515e+02 1.800e+02 2.233e+02 3.828e+02, threshold=3.600e+02, percent-clipped=0.0 2023-03-27 00:38:30,666 INFO [finetune.py:976] (6/7) Epoch 20, batch 3850, loss[loss=0.1654, simple_loss=0.2384, pruned_loss=0.04617, over 4905.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2493, pruned_loss=0.05344, over 954000.58 frames. ], batch size: 43, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:38:31,386 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112678.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 00:38:59,755 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=112722.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:39:03,156 INFO [finetune.py:976] (6/7) Epoch 20, batch 3900, loss[loss=0.1628, simple_loss=0.2315, pruned_loss=0.04703, over 4906.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2465, pruned_loss=0.053, over 953992.30 frames. ], batch size: 36, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:39:15,037 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112745.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:39:21,551 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.403e+01 1.612e+02 1.959e+02 2.410e+02 4.123e+02, threshold=3.918e+02, percent-clipped=1.0 2023-03-27 00:39:32,078 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=112770.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:39:36,268 INFO [finetune.py:976] (6/7) Epoch 20, batch 3950, loss[loss=0.1784, simple_loss=0.2441, pruned_loss=0.05633, over 4829.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2443, pruned_loss=0.0524, over 954613.64 frames. ], batch size: 30, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:39:51,977 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0714, 2.0176, 1.5199, 0.7226, 1.7202, 1.7699, 1.6408, 1.8650], device='cuda:6'), covar=tensor([0.0888, 0.0595, 0.1446, 0.1743, 0.1223, 0.1981, 0.1901, 0.0719], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0191, 0.0198, 0.0181, 0.0210, 0.0208, 0.0221, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:40:19,039 INFO [finetune.py:976] (6/7) Epoch 20, batch 4000, loss[loss=0.1359, simple_loss=0.2149, pruned_loss=0.02845, over 4771.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2447, pruned_loss=0.05306, over 952076.46 frames. ], batch size: 28, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:40:48,439 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.638e+02 1.904e+02 2.320e+02 3.891e+02, threshold=3.808e+02, percent-clipped=0.0 2023-03-27 00:41:04,948 INFO [finetune.py:976] (6/7) Epoch 20, batch 4050, loss[loss=0.2126, simple_loss=0.2796, pruned_loss=0.07283, over 4735.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2472, pruned_loss=0.05442, over 950653.37 frames. 
], batch size: 59, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:41:41,252 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112918.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:41:43,108 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=112921.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:41:47,212 INFO [finetune.py:976] (6/7) Epoch 20, batch 4100, loss[loss=0.1971, simple_loss=0.2696, pruned_loss=0.06232, over 4752.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.249, pruned_loss=0.05429, over 952110.08 frames. ], batch size: 28, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:41:51,359 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112933.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 00:42:06,078 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.570e+02 1.869e+02 2.355e+02 4.214e+02, threshold=3.739e+02, percent-clipped=0.0 2023-03-27 00:42:17,477 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=112973.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 00:42:19,818 INFO [finetune.py:976] (6/7) Epoch 20, batch 4150, loss[loss=0.1935, simple_loss=0.2664, pruned_loss=0.06028, over 4890.00 frames. ], tot_loss[loss=0.1797, simple_loss=0.2502, pruned_loss=0.05459, over 952723.74 frames. ], batch size: 35, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:42:23,982 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=112982.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:43:03,967 INFO [finetune.py:976] (6/7) Epoch 20, batch 4200, loss[loss=0.1777, simple_loss=0.2518, pruned_loss=0.05182, over 4900.00 frames. ], tot_loss[loss=0.18, simple_loss=0.251, pruned_loss=0.05454, over 953235.84 frames. ], batch size: 36, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:43:07,999 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0 2023-03-27 00:43:16,387 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113045.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:43:20,585 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 00:43:23,433 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.578e+02 1.900e+02 2.258e+02 5.235e+02, threshold=3.800e+02, percent-clipped=4.0 2023-03-27 00:43:36,994 INFO [finetune.py:976] (6/7) Epoch 20, batch 4250, loss[loss=0.1749, simple_loss=0.2401, pruned_loss=0.05486, over 4784.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2495, pruned_loss=0.05424, over 954633.67 frames. ], batch size: 51, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:43:43,334 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. limit=2.0 2023-03-27 00:43:47,720 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113093.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:44:10,288 INFO [finetune.py:976] (6/7) Epoch 20, batch 4300, loss[loss=0.174, simple_loss=0.2382, pruned_loss=0.05493, over 4867.00 frames. ], tot_loss[loss=0.176, simple_loss=0.246, pruned_loss=0.05297, over 953383.89 frames. ], batch size: 34, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:44:16,550 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. 
limit=2.0 2023-03-27 00:44:30,856 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.679e+01 1.396e+02 1.741e+02 2.023e+02 4.034e+02, threshold=3.482e+02, percent-clipped=1.0 2023-03-27 00:44:43,636 INFO [finetune.py:976] (6/7) Epoch 20, batch 4350, loss[loss=0.171, simple_loss=0.2451, pruned_loss=0.04847, over 4863.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2417, pruned_loss=0.05116, over 952433.01 frames. ], batch size: 44, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:44:48,034 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-27 00:45:12,196 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113218.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:45:17,616 INFO [finetune.py:976] (6/7) Epoch 20, batch 4400, loss[loss=0.1521, simple_loss=0.2081, pruned_loss=0.04811, over 4039.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.242, pruned_loss=0.05147, over 950548.55 frames. ], batch size: 17, lr: 3.23e-03, grad_scale: 32.0 2023-03-27 00:45:21,857 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113233.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:45:21,874 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4033, 2.2766, 1.9781, 0.9792, 2.0865, 1.7475, 1.7052, 2.0847], device='cuda:6'), covar=tensor([0.0936, 0.0809, 0.1872, 0.2131, 0.1584, 0.2561, 0.2383, 0.1068], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0199, 0.0182, 0.0211, 0.0209, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:45:22,450 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=113234.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:45:49,232 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.729e+01 1.603e+02 1.846e+02 2.173e+02 5.642e+02, threshold=3.692e+02, percent-clipped=4.0 2023-03-27 00:45:59,557 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4445, 1.5232, 1.8733, 1.7473, 1.6932, 3.3949, 1.5409, 1.6437], device='cuda:6'), covar=tensor([0.1043, 0.1784, 0.1004, 0.0999, 0.1597, 0.0233, 0.1516, 0.1860], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 00:46:00,753 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113266.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:08,363 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113273.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 00:46:10,703 INFO [finetune.py:976] (6/7) Epoch 20, batch 4450, loss[loss=0.209, simple_loss=0.2684, pruned_loss=0.07475, over 4929.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2466, pruned_loss=0.05336, over 947850.43 frames. ], batch size: 33, lr: 3.23e-03, grad_scale: 32.0
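The loss[...] and tot_loss[...] fields above each carry three components: the total loss, the simple (un-pruned, linear-joiner) loss, and the pruned RNN-T loss. Throughout this section they satisfy loss = 0.5 * simple_loss + pruned_loss, i.e. a simple-loss scale of 0.5 with full weight on the pruned loss once past warm-up. A minimal sketch of that relationship in Python (the warm-up ramp of the pruned-loss weight in the real finetune.py is omitted here):

# Sketch: how the logged `loss` relates to `simple_loss` and `pruned_loss`.
# Assumes simple_loss_scale=0.5 and a pruned-loss weight of 1.0 after warm-up;
# the actual script ramps the pruned-loss weight up early in training.
SIMPLE_LOSS_SCALE = 0.5

def combine_losses(simple_loss, pruned_loss):
    return SIMPLE_LOSS_SCALE * simple_loss + 1.0 * pruned_loss

# Check against "Epoch 20, batch 4450" above:
# 0.5 * 0.2466 + 0.05336 = 0.17666, matching the logged tot_loss of 0.1767.
assert abs(combine_losses(0.2466, 0.05336) - 0.1767) < 5e-4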
2023-03-27 00:46:10,774 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=113277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:11,445 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9895, 1.9861, 1.7372, 2.0757, 2.6017, 2.0197, 2.1125, 1.5452], device='cuda:6'), covar=tensor([0.2076, 0.1762, 0.1732, 0.1481, 0.1613, 0.1116, 0.1835, 0.1672], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0209, 0.0211, 0.0193, 0.0241, 0.0186, 0.0216, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:46:12,680 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0861, 1.9924, 1.9278, 0.9691, 2.2572, 2.5197, 2.1784, 1.8157], device='cuda:6'), covar=tensor([0.1158, 0.0851, 0.0594, 0.0761, 0.0549, 0.0790, 0.0483, 0.0974], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0124, 0.0123, 0.0129, 0.0128, 0.0140, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.9801e-05, 1.0813e-04, 8.8727e-05, 8.7072e-05, 9.0721e-05, 9.1731e-05, 1.0011e-04, 1.0551e-04], device='cuda:6') 2023-03-27 00:46:13,204 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113281.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:23,172 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=113295.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:49,987 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-27 00:46:50,260 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113321.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:46:53,839 INFO [finetune.py:976] (6/7) Epoch 21, batch 4500 -- Epoch 20, batch 4500, loss[loss=0.1862, simple_loss=0.2567, pruned_loss=0.05786, over 4819.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.2494, pruned_loss=0.05424, over 949822.58 frames. ], batch size: 39, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:46:56,436 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7077, 1.4360, 0.8664, 1.5054, 2.0244, 1.0712, 1.4982, 1.6917], device='cuda:6'), covar=tensor([0.1491, 0.1813, 0.1797, 0.1185, 0.1891, 0.1968, 0.1296, 0.1899], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0091, 0.0118, 0.0093, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 00:47:13,424 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.617e+02 2.027e+02 2.372e+02 5.258e+02, threshold=4.055e+02, percent-clipped=2.0 2023-03-27 00:47:27,583 INFO [finetune.py:976] (6/7) Epoch 20, batch 4550, loss[loss=0.1748, simple_loss=0.2456, pruned_loss=0.05202, over 4816.00 frames. ], tot_loss[loss=0.1804, simple_loss=0.2512, pruned_loss=0.05475, over 951041.54 frames. ], batch size: 33, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:47:29,615 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 00:47:45,015 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.44 vs. limit=5.0 2023-03-27 00:47:53,136 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.55 vs. limit=5.0 2023-03-27 00:48:00,725 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
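The optim.py lines report five summary statistics of recent gradient norms (min, 25%, 50%, 75%, max). In every entry in this section the clipping threshold equals Clipping_scale times the logged median, e.g. 2.0 * 2.027e+02 = 4.055e+02 just above, and percent-clipped is the share of recent batches whose norm exceeded it. A hedged sketch of that bookkeeping (names here are illustrative, not the icefall API):

import torch

# Sketch: track recent gradient norms, report their quantiles, and derive the
# clipping threshold as clipping_scale * median, as the log lines suggest.
CLIPPING_SCALE = 2.0
recent_norms = []  # gradient norms from the last ~N batches

def quartiles_and_threshold():
    q = torch.tensor(recent_norms).quantile(
        torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0])
    )
    threshold = CLIPPING_SCALE * q[2].item()  # threshold = 2.0 * median
    return q.tolist(), threshold

def clip_factor(grad_norm):
    recent_norms.append(grad_norm)
    _, threshold = quartiles_and_threshold()
    return min(1.0, threshold / grad_norm)  # scale gradients down, never up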
2023-03-27 00:48:03,325 INFO [finetune.py:976] (6/7) Epoch 20, batch 4600, loss[loss=0.1734, simple_loss=0.2501, pruned_loss=0.04835, over 4845.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.251, pruned_loss=0.0546, over 952833.88 frames. ], batch size: 47, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:48:11,465 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6856, 1.7265, 1.6523, 1.0434, 1.8221, 2.0279, 1.9212, 1.4802], device='cuda:6'), covar=tensor([0.0927, 0.0597, 0.0535, 0.0539, 0.0415, 0.0702, 0.0343, 0.0759], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0124, 0.0123, 0.0129, 0.0128, 0.0140, 0.0147], device='cuda:6'), out_proj_covar=tensor([9.0027e-05, 1.0812e-04, 8.8931e-05, 8.6984e-05, 9.0835e-05, 9.1763e-05, 1.0074e-04, 1.0572e-04], device='cuda:6') 2023-03-27 00:48:31,341 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.757e+01 1.614e+02 1.832e+02 2.145e+02 3.668e+02, threshold=3.664e+02, percent-clipped=0.0 2023-03-27 00:48:45,584 INFO [finetune.py:976] (6/7) Epoch 20, batch 4650, loss[loss=0.1547, simple_loss=0.2308, pruned_loss=0.03929, over 4896.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2479, pruned_loss=0.0535, over 954844.15 frames. ], batch size: 35, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:49:19,010 INFO [finetune.py:976] (6/7) Epoch 20, batch 4700, loss[loss=0.1501, simple_loss=0.221, pruned_loss=0.03965, over 4837.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.245, pruned_loss=0.05247, over 955122.21 frames. ], batch size: 47, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:49:37,206 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.713e+01 1.518e+02 1.819e+02 2.172e+02 5.123e+02, threshold=3.639e+02, percent-clipped=2.0 2023-03-27 00:49:51,489 INFO [finetune.py:976] (6/7) Epoch 20, batch 4750, loss[loss=0.1489, simple_loss=0.2187, pruned_loss=0.03955, over 4901.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.242, pruned_loss=0.05135, over 954197.71 frames. ], batch size: 35, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:49:52,069 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113577.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:00,075 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=113590.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:06,689 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1950, 1.9700, 2.1363, 1.0321, 2.4959, 2.8039, 2.1199, 1.9393], device='cuda:6'), covar=tensor([0.0997, 0.1105, 0.0665, 0.0766, 0.0724, 0.0804, 0.0680, 0.0817], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0124, 0.0123, 0.0129, 0.0128, 0.0141, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.9854e-05, 1.0845e-04, 8.8986e-05, 8.7060e-05, 9.1008e-05, 9.1734e-05, 1.0103e-04, 1.0570e-04], device='cuda:6') 2023-03-27 00:50:18,364 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=113617.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:23,694 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113625.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:24,846 INFO [finetune.py:976] (6/7) Epoch 20, batch 4800, loss[loss=0.1723, simple_loss=0.2443, pruned_loss=0.05013, over 4772.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2463, pruned_loss=0.05347, over 954614.74 frames. ], batch size: 26, lr: 3.22e-03, grad_scale: 32.0
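Note that tot_loss is not an epoch average: it is reported "over ~9.5e5 frames" and that frame count hovers in place instead of growing, which is consistent with a decayed, frame-weighted running average of recent batch losses. A small sketch of that idea (the decay constant is illustrative, not taken from the source):

# Sketch: frame-weighted running average behind `tot_loss[... over N frames]`.
# DECAY is illustrative; it keeps the effective window near ~9.5e5 frames
# instead of letting the count grow without bound.
DECAY = 0.999

tot_weighted_loss = 0.0
tot_frames = 0.0

def update_tot_loss(batch_loss, batch_frames):
    global tot_weighted_loss, tot_frames
    tot_weighted_loss = DECAY * tot_weighted_loss + batch_loss * batch_frames
    tot_frames = DECAY * tot_frames + batch_frames
    return tot_weighted_loss / tot_frames  # the printed tot_loss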
2023-03-27 00:50:37,470 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=113646.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:50:40,317 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-27 00:50:43,816 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.583e+02 1.918e+02 2.188e+02 4.674e+02, threshold=3.836e+02, percent-clipped=2.0 2023-03-27 00:50:57,264 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 00:51:01,327 INFO [finetune.py:976] (6/7) Epoch 20, batch 4850, loss[loss=0.2137, simple_loss=0.2783, pruned_loss=0.07456, over 4819.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2488, pruned_loss=0.05433, over 953184.79 frames. ], batch size: 40, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:51:02,057 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=113678.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:51:34,396 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7582, 4.0123, 3.7963, 1.6103, 4.1181, 3.0900, 1.0977, 2.7823], device='cuda:6'), covar=tensor([0.2386, 0.1982, 0.1612, 0.3556, 0.0973, 0.0979, 0.4170, 0.1420], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0178, 0.0161, 0.0130, 0.0162, 0.0124, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 00:51:34,439 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=113707.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:51:57,944 INFO [finetune.py:976] (6/7) Epoch 20, batch 4900, loss[loss=0.1574, simple_loss=0.2389, pruned_loss=0.03794, over 4928.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2498, pruned_loss=0.05403, over 955610.11 frames. ], batch size: 42, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:52:07,539 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-27 00:52:20,124 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.649e+02 2.053e+02 2.498e+02 4.513e+02, threshold=4.105e+02, percent-clipped=3.0 2023-03-27 00:52:23,723 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4657, 2.3303, 2.0757, 2.3083, 2.2172, 2.2778, 2.2493, 2.8667], device='cuda:6'), covar=tensor([0.3248, 0.3953, 0.3096, 0.3440, 0.3713, 0.2332, 0.3571, 0.1626], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0232, 0.0277, 0.0253, 0.0223, 0.0252, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:52:34,814 INFO [finetune.py:976] (6/7) Epoch 20, batch 4950, loss[loss=0.1808, simple_loss=0.261, pruned_loss=0.05031, over 4917.00 frames. ], tot_loss[loss=0.1803, simple_loss=0.2514, pruned_loss=0.05462, over 954343.73 frames. ], batch size: 37, lr: 3.22e-03, grad_scale: 32.0
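The attn_weights_entropy tensors are a diagnostic on the self-attention layers: one entropy value per attention head, where log(T) means the head spreads evenly over all T key positions and values near 0 mean it locks onto a single frame. A plausible reconstruction of the computation (the exact reduction in zipformer.py may differ):

import torch

# Sketch: per-head entropy of softmax attention weights.
# attn_weights: (num_heads, T_query, T_key), each row summing to 1.
def attn_weights_entropy(attn_weights):
    p = attn_weights.clamp(min=1.0e-20)   # avoid log(0)
    entropy = -(p * p.log()).sum(dim=-1)  # (num_heads, T_query)
    return entropy.mean(dim=-1)           # one summary value per head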
2023-03-27 00:52:51,650 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0067, 1.9203, 1.6034, 1.9703, 1.9822, 1.6910, 2.1473, 2.0227], device='cuda:6'), covar=tensor([0.1334, 0.1874, 0.2772, 0.2298, 0.2407, 0.1674, 0.3076, 0.1628], device='cuda:6'), in_proj_covar=tensor([0.0185, 0.0186, 0.0233, 0.0251, 0.0245, 0.0203, 0.0213, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:53:07,554 INFO [finetune.py:976] (6/7) Epoch 20, batch 5000, loss[loss=0.1499, simple_loss=0.2266, pruned_loss=0.03655, over 4787.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2499, pruned_loss=0.05421, over 952210.16 frames. ], batch size: 29, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:53:26,539 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.516e+02 1.779e+02 2.023e+02 5.156e+02, threshold=3.559e+02, percent-clipped=2.0 2023-03-27 00:53:32,810 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5147, 1.3382, 1.2353, 1.4378, 1.7166, 1.5077, 0.9112, 1.2751], device='cuda:6'), covar=tensor([0.2223, 0.2237, 0.2156, 0.1828, 0.1497, 0.1437, 0.2827, 0.1998], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0209, 0.0211, 0.0194, 0.0242, 0.0186, 0.0216, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:53:42,069 INFO [finetune.py:976] (6/7) Epoch 20, batch 5050, loss[loss=0.1524, simple_loss=0.225, pruned_loss=0.03986, over 4863.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2481, pruned_loss=0.05371, over 954350.52 frames. ], batch size: 31, lr: 3.22e-03, grad_scale: 64.0 2023-03-27 00:53:48,139 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7736, 1.6146, 1.8124, 1.2225, 1.8612, 2.0013, 1.9042, 1.2892], device='cuda:6'), covar=tensor([0.0679, 0.0979, 0.0710, 0.1017, 0.0831, 0.0587, 0.0629, 0.1766], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0135, 0.0139, 0.0120, 0.0124, 0.0138, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:53:51,008 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=113890.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:54:14,769 INFO [finetune.py:976] (6/7) Epoch 20, batch 5100, loss[loss=0.1666, simple_loss=0.2315, pruned_loss=0.0509, over 4766.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2441, pruned_loss=0.05217, over 955237.16 frames. ], batch size: 28, lr: 3.22e-03, grad_scale: 32.0
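grad_scale is the dynamic loss scale of mixed-precision training. Its brief jump from 32.0 to 64.0 at batch 5050 and back to 32.0 by batch 5100 is the usual pattern: the scale is doubled after a long run of overflow-free steps and halved as soon as an fp16 overflow appears. A sketch of that policy (the growth constant follows torch.cuda.amp.GradScaler's default, not the icefall source):

# Sketch: dynamic loss scaling. The loss is multiplied by `scale` before
# backward; if the resulting grads contain inf/nan the step is skipped and
# the scale shrinks, otherwise the scale grows every GROWTH_INTERVAL steps.
GROWTH_INTERVAL = 2000  # torch.cuda.amp.GradScaler's default

scale = 32.0
good_steps = 0

def after_backward(grads_are_finite):
    global scale, good_steps
    if not grads_are_finite:
        scale *= 0.5       # e.g. 64.0 -> 32.0, as seen right after batch 5050
        good_steps = 0
        return False       # skip this optimizer step
    good_steps += 1
    if good_steps >= GROWTH_INTERVAL:
        scale *= 2.0       # e.g. 32.0 -> 64.0
        good_steps = 0
    return True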
2023-03-27 00:54:23,015 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=113938.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:54:35,329 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7044, 1.8016, 1.4623, 1.8513, 2.1748, 1.9362, 1.4701, 1.4321], device='cuda:6'), covar=tensor([0.2260, 0.1985, 0.1967, 0.1650, 0.1712, 0.1164, 0.2350, 0.1999], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0209, 0.0211, 0.0193, 0.0242, 0.0187, 0.0216, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:54:35,765 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.567e+02 1.826e+02 2.193e+02 3.507e+02, threshold=3.652e+02, percent-clipped=0.0 2023-03-27 00:54:37,185 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 00:54:46,079 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=113973.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:54:48,407 INFO [finetune.py:976] (6/7) Epoch 20, batch 5150, loss[loss=0.1858, simple_loss=0.2588, pruned_loss=0.05638, over 4903.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2452, pruned_loss=0.053, over 954602.05 frames. ], batch size: 43, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:55:07,780 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=114002.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:55:23,294 INFO [finetune.py:976] (6/7) Epoch 20, batch 5200, loss[loss=0.2166, simple_loss=0.284, pruned_loss=0.07459, over 4770.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2498, pruned_loss=0.05417, over 957006.20 frames. ], batch size: 59, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:55:43,822 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.581e+02 1.814e+02 2.243e+02 3.815e+02, threshold=3.628e+02, percent-clipped=1.0 2023-03-27 00:55:44,055 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.37 vs. limit=5.0 2023-03-27 00:55:56,415 INFO [finetune.py:976] (6/7) Epoch 20, batch 5250, loss[loss=0.1638, simple_loss=0.2361, pruned_loss=0.04578, over 4755.00 frames. ], tot_loss[loss=0.1812, simple_loss=0.2519, pruned_loss=0.05522, over 956504.61 frames. ], batch size: 28, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:56:19,910 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-27 00:56:36,234 INFO [finetune.py:976] (6/7) Epoch 20, batch 5300, loss[loss=0.1768, simple_loss=0.2477, pruned_loss=0.05292, over 4815.00 frames. ], tot_loss[loss=0.1809, simple_loss=0.2517, pruned_loss=0.05506, over 953484.52 frames. ], batch size: 38, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:56:36,362 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=114127.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:56:57,476 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-27 00:57:13,342 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.589e+01 1.501e+02 1.834e+02 2.338e+02 4.244e+02, threshold=3.669e+02, percent-clipped=1.0 2023-03-27 00:57:13,667 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0
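The scaling.py Whitening lines compare a per-module statistic against a limit (2.0 for the 8-group projections here, 5.0 for the single-group ones); the associated penalty only engages while the metric exceeds its limit, which is why these lines appear sporadically. One plausible form of the metric, equal to 1.0 for a perfectly white (isotropic) feature covariance and growing with correlation (the exact formula in scaling.py may differ):

import torch

# Sketch: a whiteness statistic for grouped channels. For covariance
# eigenvalues l_i it computes d * sum(l_i^2) / (sum(l_i))^2, which is 1.0
# when all eigenvalues are equal and d when one direction dominates.
def whitening_metric(x, num_groups):
    n, c = x.shape                                    # (num_frames, num_channels)
    d = c // num_groups
    x = x.reshape(n, num_groups, d).transpose(0, 1)   # (groups, n, d)
    cov = x.transpose(1, 2) @ x / n                   # (groups, d, d)
    trace = cov.diagonal(dim1=-2, dim2=-1).sum(-1)    # sum of eigenvalues
    sq = (cov * cov).sum(dim=(-2, -1))                # sum of squared eigenvalues
    return (d * sq / (trace * trace)).mean()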
2023-03-27 00:57:30,110 INFO [finetune.py:976] (6/7) Epoch 20, batch 5350, loss[loss=0.1634, simple_loss=0.2285, pruned_loss=0.04916, over 4743.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2511, pruned_loss=0.05434, over 953822.92 frames. ], batch size: 26, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:57:37,359 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=114188.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:57:47,477 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9029, 1.3048, 0.8039, 1.7393, 2.1740, 1.5803, 1.5764, 1.8378], device='cuda:6'), covar=tensor([0.1465, 0.2087, 0.2047, 0.1145, 0.1838, 0.1952, 0.1444, 0.1952], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0094, 0.0110, 0.0091, 0.0119, 0.0092, 0.0096, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 00:57:50,956 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9780, 1.8384, 1.6539, 1.9967, 2.6355, 2.0196, 2.1293, 1.6176], device='cuda:6'), covar=tensor([0.2324, 0.2237, 0.2156, 0.1837, 0.1758, 0.1406, 0.2011, 0.1955], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0194, 0.0243, 0.0187, 0.0217, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:58:03,280 INFO [finetune.py:976] (6/7) Epoch 20, batch 5400, loss[loss=0.1623, simple_loss=0.2221, pruned_loss=0.05127, over 4859.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2478, pruned_loss=0.05327, over 953853.98 frames. ], batch size: 49, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:58:14,069 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6593, 1.3894, 1.9867, 1.8820, 1.5092, 3.4649, 1.2537, 1.4472], device='cuda:6'), covar=tensor([0.1137, 0.2598, 0.1415, 0.1155, 0.2004, 0.0264, 0.2186, 0.2596], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 00:58:23,339 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.275e+01 1.521e+02 1.786e+02 2.103e+02 5.074e+02, threshold=3.573e+02, percent-clipped=1.0 2023-03-27 00:58:30,073 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9964, 1.7189, 2.0720, 1.4294, 2.0164, 2.1447, 1.5782, 2.3691], device='cuda:6'), covar=tensor([0.1024, 0.1844, 0.1270, 0.1785, 0.0759, 0.1269, 0.2638, 0.0692], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0206, 0.0190, 0.0189, 0.0174, 0.0213, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 00:58:33,650 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=114273.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:58:35,984 INFO [finetune.py:976] (6/7) Epoch 20, batch 5450, loss[loss=0.1359, simple_loss=0.213, pruned_loss=0.02938, over 4910.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.246, pruned_loss=0.05266, over 954730.49 frames. ], batch size: 36, lr: 3.22e-03, grad_scale: 32.0
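The zipformer.py:1188 lines track stochastic layer skipping: on some forward passes an encoder stack drops one of its layers outright (num_to_drop=1, layers_to_drop={1}), a stochastic-depth style regularizer. The warmup_begin/warmup_end pairs are per-stack step windows; well past them, as here around batch_count=114000, only a small residual drop probability remains. A sketch of the bookkeeping (the probabilities are illustrative, not the actual zipformer.py values):

import random

# Sketch: pick which layers of a stack to drop on this forward pass.
# The drop rate is high inside the warmup window and small after it.
def pick_layers_to_drop(num_layers, batch_count, warmup_begin, warmup_end):
    if batch_count < warmup_begin:
        p = 0.5
    elif batch_count < warmup_end:
        # taper linearly across the warmup window
        p = 0.5 * (warmup_end - batch_count) / (warmup_end - warmup_begin)
    else:
        p = 0.05  # small residual rate; most lines log num_to_drop=0
    num_to_drop = sum(random.random() < p for _ in range(num_layers))
    layers_to_drop = set(random.sample(range(num_layers), num_to_drop))
    return num_to_drop, layers_to_drop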
2023-03-27 00:58:51,608 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=114302.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:59:05,065 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=114321.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:59:08,650 INFO [finetune.py:976] (6/7) Epoch 20, batch 5500, loss[loss=0.1614, simple_loss=0.224, pruned_loss=0.04938, over 4902.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2439, pruned_loss=0.05267, over 953810.52 frames. ], batch size: 37, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 00:59:16,345 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 00:59:23,096 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=114350.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 00:59:27,725 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.653e+01 1.524e+02 1.755e+02 2.254e+02 4.886e+02, threshold=3.510e+02, percent-clipped=3.0 2023-03-27 00:59:42,372 INFO [finetune.py:976] (6/7) Epoch 20, batch 5550, loss[loss=0.1792, simple_loss=0.247, pruned_loss=0.05571, over 4869.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2442, pruned_loss=0.0524, over 951812.46 frames. ], batch size: 31, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 01:00:14,062 INFO [finetune.py:976] (6/7) Epoch 20, batch 5600, loss[loss=0.1394, simple_loss=0.2225, pruned_loss=0.02811, over 4788.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2469, pruned_loss=0.0529, over 951507.09 frames. ], batch size: 29, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 01:00:31,887 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.587e+02 1.911e+02 2.256e+02 4.682e+02, threshold=3.822e+02, percent-clipped=4.0 2023-03-27 01:00:43,490 INFO [finetune.py:976] (6/7) Epoch 20, batch 5650, loss[loss=0.2125, simple_loss=0.2805, pruned_loss=0.07227, over 4886.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2498, pruned_loss=0.05372, over 951063.35 frames. ], batch size: 32, lr: 3.22e-03, grad_scale: 32.0 2023-03-27 01:00:46,986 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=114483.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:00:57,539 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7609, 2.3508, 1.7925, 0.8948, 2.1208, 2.1529, 1.8045, 2.1467], device='cuda:6'), covar=tensor([0.0715, 0.0913, 0.1692, 0.2178, 0.1131, 0.2259, 0.2347, 0.0985], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0190, 0.0198, 0.0181, 0.0209, 0.0209, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:01:04,503 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2332, 1.6690, 1.1071, 1.8965, 2.3684, 1.7307, 1.8045, 1.9433], device='cuda:6'), covar=tensor([0.1295, 0.1918, 0.1807, 0.1095, 0.1753, 0.1820, 0.1356, 0.1948], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0091, 0.0120, 0.0093, 0.0097, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 01:01:13,266 INFO [finetune.py:976] (6/7) Epoch 20, batch 5700, loss[loss=0.1376, simple_loss=0.2097, pruned_loss=0.0327, over 3940.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2458, pruned_loss=0.05227, over 937536.90 frames. ], batch size: 17, lr: 3.22e-03, grad_scale: 32.0
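The lr field creeps from 3.23e-03 down toward 3.21e-03 over this stretch, consistent with icefall's Eden schedule, where the learning rate decays smoothly in both the batch index and the epoch. The settings below are assumptions made here (base_lr=0.004, lr_batches=1e5, lr_epochs=100), chosen because they reproduce the logged values:

# Sketch: the Eden learning-rate schedule from icefall's optim.py.
# base_lr, lr_batches, and lr_epochs are assumed values, not read from
# this log; with them the formula matches the printed lr to 3 digits.
def eden_lr(base_lr, batch, epoch, lr_batches=1.0e5, lr_epochs=100.0):
    batch_factor = ((batch**2 + lr_batches**2) / lr_batches**2) ** -0.25
    epoch_factor = ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# eden_lr(0.004, batch=114000, epoch=20) -> ~3.22e-03, matching the log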
2023-03-27 01:01:39,092 INFO [finetune.py:976] (6/7) Epoch 21, batch 0, loss[loss=0.1297, simple_loss=0.2066, pruned_loss=0.02637, over 4708.00 frames. ], tot_loss[loss=0.1297, simple_loss=0.2066, pruned_loss=0.02637, over 4708.00 frames. ], batch size: 23, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:01:39,092 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 01:01:46,356 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8746, 3.4700, 3.5795, 3.7917, 3.6064, 3.4187, 3.9413, 1.3164], device='cuda:6'), covar=tensor([0.0854, 0.0780, 0.0832, 0.0806, 0.1480, 0.1665, 0.0739, 0.5376], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0241, 0.0277, 0.0289, 0.0330, 0.0283, 0.0301, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:01:52,342 INFO [finetune.py:1010] (6/7) Epoch 21, validation: loss=0.1598, simple_loss=0.2277, pruned_loss=0.0459, over 2265189.00 frames. 2023-03-27 01:01:52,342 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6385MB 2023-03-27 01:01:56,943 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.573e+01 1.356e+02 1.658e+02 2.014e+02 3.472e+02, threshold=3.316e+02, percent-clipped=0.0 2023-03-27 01:02:29,067 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8504, 1.2747, 1.8999, 1.8473, 1.6149, 1.5585, 1.7480, 1.8076], device='cuda:6'), covar=tensor([0.3315, 0.3312, 0.2702, 0.3050, 0.4048, 0.3504, 0.3692, 0.2539], device='cuda:6'), in_proj_covar=tensor([0.0254, 0.0243, 0.0263, 0.0281, 0.0280, 0.0255, 0.0290, 0.0245], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:02:47,784 INFO [finetune.py:976] (6/7) Epoch 21, batch 50, loss[loss=0.1709, simple_loss=0.2396, pruned_loss=0.05112, over 4895.00 frames. ], tot_loss[loss=0.1877, simple_loss=0.2582, pruned_loss=0.05865, over 214863.02 frames. ], batch size: 36, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:03:04,151 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9820, 1.9232, 1.6065, 1.6591, 1.8204, 1.7557, 1.8084, 2.4561], device='cuda:6'), covar=tensor([0.3712, 0.3628, 0.3154, 0.3375, 0.3704, 0.2309, 0.3561, 0.1748], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0231, 0.0276, 0.0252, 0.0222, 0.0252, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:03:12,725 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.63 vs. limit=2.0 2023-03-27 01:03:21,579 INFO [finetune.py:976] (6/7) Epoch 21, batch 100, loss[loss=0.2269, simple_loss=0.2807, pruned_loss=0.08658, over 4829.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.247, pruned_loss=0.05533, over 378624.73 frames. ], batch size: 47, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:03:23,374 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.865e+01 1.560e+02 1.971e+02 2.354e+02 5.080e+02, threshold=3.943e+02, percent-clipped=2.0 2023-03-27 01:03:24,050 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 01:03:54,253 INFO [finetune.py:976] (6/7) Epoch 21, batch 150, loss[loss=0.1471, simple_loss=0.2209, pruned_loss=0.03672, over 4865.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2412, pruned_loss=0.05194, over 507143.52 frames. ], batch size: 34, lr: 3.21e-03, grad_scale: 32.0
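At the first batch of each epoch the script pauses to run the dev set once (the "Computing validation loss" line) and reports frame-weighted averages over the whole set, here 2265189 frames. A hedged sketch of that pass (compute_loss stands in for the real per-batch loss computation in finetune.py):

import torch

# Sketch: one pass over the dev loader with gradients disabled, accumulating
# frame-weighted sums so the reported numbers are per-frame averages.
def validate(model, dev_loader, compute_loss):
    sums = {"loss": 0.0, "simple_loss": 0.0, "pruned_loss": 0.0}
    frames = 0.0
    model.eval()
    with torch.no_grad():
        for batch in dev_loader:
            loss, simple_loss, pruned_loss, n = compute_loss(model, batch)
            sums["loss"] += loss * n
            sums["simple_loss"] += simple_loss * n
            sums["pruned_loss"] += pruned_loss * n
            frames += n
    model.train()
    return {k: v / frames for k, v in sums.items()}  # e.g. loss=0.1598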
2023-03-27 01:04:02,529 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=114716.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:04:03,459 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-27 01:04:16,923 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1247, 1.9098, 1.4251, 0.6154, 1.6431, 1.7178, 1.5954, 1.7541], device='cuda:6'), covar=tensor([0.0923, 0.0900, 0.1802, 0.2209, 0.1569, 0.2832, 0.2713, 0.1072], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0191, 0.0198, 0.0182, 0.0209, 0.0209, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:04:26,908 INFO [finetune.py:976] (6/7) Epoch 21, batch 200, loss[loss=0.1672, simple_loss=0.2385, pruned_loss=0.04796, over 4778.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2416, pruned_loss=0.0521, over 605490.34 frames. ], batch size: 28, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:04:29,188 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.002e+02 1.562e+02 1.886e+02 2.300e+02 5.249e+02, threshold=3.772e+02, percent-clipped=1.0 2023-03-27 01:04:42,932 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=114777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:04:46,598 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=114783.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:05:00,769 INFO [finetune.py:976] (6/7) Epoch 21, batch 250, loss[loss=0.1817, simple_loss=0.2521, pruned_loss=0.05564, over 4866.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2435, pruned_loss=0.05282, over 683722.39 frames. ], batch size: 31, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:05:19,225 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=114831.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:05:33,147 INFO [finetune.py:976] (6/7) Epoch 21, batch 300, loss[loss=0.176, simple_loss=0.2317, pruned_loss=0.06014, over 4763.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2465, pruned_loss=0.05335, over 743284.91 frames.
], batch size: 26, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:05:36,360 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.093e+02 1.500e+02 1.787e+02 2.137e+02 3.935e+02, threshold=3.575e+02, percent-clipped=3.0 2023-03-27 01:05:45,145 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5842, 1.4315, 1.4620, 1.4872, 0.9316, 2.9955, 1.1495, 1.5283], device='cuda:6'), covar=tensor([0.3415, 0.2507, 0.2157, 0.2550, 0.1945, 0.0265, 0.2725, 0.1323], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 01:05:46,342 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=114871.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:05:51,808 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5286, 1.6305, 1.2953, 1.5122, 1.8882, 1.7474, 1.5837, 1.3635], device='cuda:6'), covar=tensor([0.0341, 0.0279, 0.0554, 0.0297, 0.0200, 0.0445, 0.0302, 0.0374], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0108, 0.0145, 0.0112, 0.0100, 0.0111, 0.0100, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.5750e-05, 8.2828e-05, 1.1374e-04, 8.5712e-05, 7.8034e-05, 8.2437e-05, 7.4557e-05, 8.5994e-05], device='cuda:6') 2023-03-27 01:06:06,528 INFO [finetune.py:976] (6/7) Epoch 21, batch 350, loss[loss=0.2127, simple_loss=0.2846, pruned_loss=0.07045, over 4922.00 frames. ], tot_loss[loss=0.1789, simple_loss=0.249, pruned_loss=0.05436, over 789856.63 frames. ], batch size: 42, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:06:26,472 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=114932.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:06:40,074 INFO [finetune.py:976] (6/7) Epoch 21, batch 400, loss[loss=0.1516, simple_loss=0.2333, pruned_loss=0.03492, over 4924.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2501, pruned_loss=0.05428, over 824830.36 frames. ], batch size: 42, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:06:41,872 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.627e+02 1.950e+02 2.383e+02 4.205e+02, threshold=3.900e+02, percent-clipped=3.0 2023-03-27 01:07:20,652 INFO [finetune.py:976] (6/7) Epoch 21, batch 450, loss[loss=0.1619, simple_loss=0.2344, pruned_loss=0.04468, over 4789.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2492, pruned_loss=0.054, over 854544.40 frames. ], batch size: 29, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:08:02,754 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2935, 3.6749, 3.9119, 4.1625, 4.0422, 3.8052, 4.4200, 1.3222], device='cuda:6'), covar=tensor([0.0790, 0.0876, 0.0857, 0.0941, 0.1234, 0.1620, 0.0660, 0.5531], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0242, 0.0277, 0.0291, 0.0330, 0.0282, 0.0302, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:08:06,648 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.24 vs. limit=5.0 2023-03-27 01:08:11,192 INFO [finetune.py:976] (6/7) Epoch 21, batch 500, loss[loss=0.1785, simple_loss=0.2472, pruned_loss=0.05488, over 4806.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.247, pruned_loss=0.05339, over 877958.18 frames. 
], batch size: 45, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:08:13,015 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.154e+02 1.445e+02 1.728e+02 2.124e+02 2.919e+02, threshold=3.456e+02, percent-clipped=0.0 2023-03-27 01:08:24,213 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115072.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:08:33,766 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115086.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:08:45,009 INFO [finetune.py:976] (6/7) Epoch 21, batch 550, loss[loss=0.2157, simple_loss=0.2839, pruned_loss=0.07377, over 4860.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2444, pruned_loss=0.05247, over 897465.39 frames. ], batch size: 44, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:08:52,888 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8993, 1.9332, 1.5962, 2.0210, 2.3285, 2.1276, 1.6847, 1.5202], device='cuda:6'), covar=tensor([0.2104, 0.1791, 0.1857, 0.1529, 0.1607, 0.1082, 0.2295, 0.1760], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0210, 0.0212, 0.0194, 0.0242, 0.0187, 0.0217, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:09:14,150 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115147.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:09:18,263 INFO [finetune.py:976] (6/7) Epoch 21, batch 600, loss[loss=0.1609, simple_loss=0.2246, pruned_loss=0.04863, over 4836.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2431, pruned_loss=0.0519, over 910940.57 frames. ], batch size: 25, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:09:19,079 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 01:09:20,111 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.558e+02 1.835e+02 2.263e+02 4.639e+02, threshold=3.670e+02, percent-clipped=5.0 2023-03-27 01:09:32,488 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 01:09:41,575 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1480, 2.1688, 2.2557, 1.5420, 2.2300, 2.3408, 2.3975, 1.7928], device='cuda:6'), covar=tensor([0.0628, 0.0693, 0.0675, 0.0924, 0.0675, 0.0648, 0.0580, 0.1207], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0137, 0.0141, 0.0121, 0.0125, 0.0140, 0.0141, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:09:51,871 INFO [finetune.py:976] (6/7) Epoch 21, batch 650, loss[loss=0.1532, simple_loss=0.2268, pruned_loss=0.03977, over 4773.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2465, pruned_loss=0.05295, over 920677.72 frames. 
], batch size: 27, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:10:01,553 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4436, 1.3577, 1.4791, 0.8261, 1.4803, 1.5037, 1.4974, 1.2716], device='cuda:6'), covar=tensor([0.0620, 0.0802, 0.0726, 0.0978, 0.0954, 0.0728, 0.0681, 0.1294], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0137, 0.0141, 0.0121, 0.0125, 0.0140, 0.0141, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:10:07,426 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115227.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:10:25,139 INFO [finetune.py:976] (6/7) Epoch 21, batch 700, loss[loss=0.1592, simple_loss=0.2349, pruned_loss=0.04179, over 4764.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2494, pruned_loss=0.05347, over 929823.63 frames. ], batch size: 28, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:10:26,912 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.664e+02 1.957e+02 2.283e+02 3.730e+02, threshold=3.914e+02, percent-clipped=1.0 2023-03-27 01:10:32,799 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.52 vs. limit=5.0 2023-03-27 01:10:58,917 INFO [finetune.py:976] (6/7) Epoch 21, batch 750, loss[loss=0.1635, simple_loss=0.2246, pruned_loss=0.05123, over 4712.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2498, pruned_loss=0.05377, over 933630.66 frames. ], batch size: 23, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:11:31,742 INFO [finetune.py:976] (6/7) Epoch 21, batch 800, loss[loss=0.1873, simple_loss=0.2601, pruned_loss=0.05722, over 4888.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2498, pruned_loss=0.05341, over 939361.20 frames. ], batch size: 32, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:11:33,560 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.112e+02 1.426e+02 1.720e+02 2.044e+02 3.360e+02, threshold=3.441e+02, percent-clipped=0.0 2023-03-27 01:11:41,522 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115370.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:11:42,121 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115371.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:11:42,713 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=115372.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:11:49,180 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5595, 1.5157, 1.3040, 1.4741, 1.8998, 1.7954, 1.5311, 1.3582], device='cuda:6'), covar=tensor([0.0372, 0.0311, 0.0614, 0.0322, 0.0215, 0.0454, 0.0361, 0.0417], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0109, 0.0146, 0.0113, 0.0101, 0.0112, 0.0101, 0.0114], device='cuda:6'), out_proj_covar=tensor([7.6653e-05, 8.3669e-05, 1.1514e-04, 8.6645e-05, 7.8521e-05, 8.3039e-05, 7.5312e-05, 8.6898e-05], device='cuda:6') 2023-03-27 01:12:04,585 INFO [finetune.py:976] (6/7) Epoch 21, batch 850, loss[loss=0.1498, simple_loss=0.2256, pruned_loss=0.03696, over 4790.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2484, pruned_loss=0.05341, over 943781.34 frames. ], batch size: 29, lr: 3.21e-03, grad_scale: 32.0 2023-03-27 01:12:11,065 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.59 vs. 
limit=2.0 2023-03-27 01:12:13,716 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7201, 2.4678, 2.0931, 1.0761, 2.2102, 2.0475, 2.0312, 2.2951], device='cuda:6'), covar=tensor([0.0823, 0.0862, 0.1582, 0.2101, 0.1450, 0.2198, 0.2008, 0.0878], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0190, 0.0198, 0.0181, 0.0209, 0.0208, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:12:14,865 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=115420.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:16,192 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.55 vs. limit=5.0 2023-03-27 01:12:24,446 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115431.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:29,874 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115432.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:32,204 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4506, 2.3921, 2.0822, 1.1165, 2.1299, 1.9535, 1.8435, 2.1499], device='cuda:6'), covar=tensor([0.0825, 0.0697, 0.1446, 0.1935, 0.1388, 0.1949, 0.1888, 0.0928], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0190, 0.0198, 0.0181, 0.0209, 0.0209, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:12:42,479 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115442.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:12:54,664 INFO [finetune.py:976] (6/7) Epoch 21, batch 900, loss[loss=0.1426, simple_loss=0.2188, pruned_loss=0.03323, over 4946.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2454, pruned_loss=0.05252, over 945956.54 frames. ], batch size: 33, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:13:00,780 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.494e+02 1.776e+02 2.140e+02 4.219e+02, threshold=3.551e+02, percent-clipped=3.0 2023-03-27 01:13:37,349 INFO [finetune.py:976] (6/7) Epoch 21, batch 950, loss[loss=0.18, simple_loss=0.2463, pruned_loss=0.05691, over 4824.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2439, pruned_loss=0.05211, over 947779.36 frames. ], batch size: 30, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:13:43,084 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.74 vs. limit=2.0 2023-03-27 01:13:51,953 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=115527.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:14:11,292 INFO [finetune.py:976] (6/7) Epoch 21, batch 1000, loss[loss=0.1894, simple_loss=0.2669, pruned_loss=0.05595, over 4822.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2454, pruned_loss=0.05206, over 950100.65 frames. ], batch size: 39, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:14:13,112 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.049e+02 1.550e+02 1.849e+02 2.159e+02 3.452e+02, threshold=3.698e+02, percent-clipped=0.0 2023-03-27 01:14:24,470 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=115575.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:14:44,025 INFO [finetune.py:976] (6/7) Epoch 21, batch 1050, loss[loss=0.185, simple_loss=0.2573, pruned_loss=0.05632, over 4904.00 frames. 
], tot_loss[loss=0.176, simple_loss=0.2476, pruned_loss=0.05225, over 951776.38 frames. ], batch size: 37, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:15:16,670 INFO [finetune.py:976] (6/7) Epoch 21, batch 1100, loss[loss=0.1608, simple_loss=0.2205, pruned_loss=0.05054, over 4233.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2484, pruned_loss=0.05287, over 952879.92 frames. ], batch size: 18, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:15:19,446 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.043e+02 1.582e+02 1.822e+02 2.328e+02 4.675e+02, threshold=3.643e+02, percent-clipped=4.0 2023-03-27 01:15:20,508 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 01:15:50,439 INFO [finetune.py:976] (6/7) Epoch 21, batch 1150, loss[loss=0.1644, simple_loss=0.2242, pruned_loss=0.0523, over 4867.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2484, pruned_loss=0.05239, over 954516.50 frames. ], batch size: 31, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:16:05,314 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115726.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:05,917 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=115727.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:14,930 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=115742.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:15,570 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1114, 1.0077, 0.9769, 0.3932, 0.9588, 1.2144, 1.1985, 1.0009], device='cuda:6'), covar=tensor([0.0943, 0.0623, 0.0584, 0.0624, 0.0660, 0.0594, 0.0456, 0.0770], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0125, 0.0123, 0.0130, 0.0129, 0.0142, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.9982e-05, 1.0796e-04, 8.9652e-05, 8.7279e-05, 9.1453e-05, 9.1930e-05, 1.0173e-04, 1.0609e-04], device='cuda:6') 2023-03-27 01:16:16,255 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 01:16:24,031 INFO [finetune.py:976] (6/7) Epoch 21, batch 1200, loss[loss=0.1584, simple_loss=0.2345, pruned_loss=0.04109, over 4821.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2474, pruned_loss=0.05255, over 955055.81 frames. ], batch size: 30, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:16:25,822 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.936e+01 1.465e+02 1.737e+02 2.048e+02 4.574e+02, threshold=3.475e+02, percent-clipped=2.0 2023-03-27 01:16:47,357 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=115790.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:16:56,822 INFO [finetune.py:976] (6/7) Epoch 21, batch 1250, loss[loss=0.1739, simple_loss=0.2375, pruned_loss=0.05514, over 4829.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2455, pruned_loss=0.05186, over 955979.06 frames. 
], batch size: 39, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:17:13,349 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7068, 4.1150, 3.8218, 1.7621, 4.2696, 3.3190, 1.0944, 2.8354], device='cuda:6'), covar=tensor([0.2216, 0.2001, 0.1504, 0.3454, 0.0910, 0.0879, 0.4241, 0.1443], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0178, 0.0159, 0.0129, 0.0161, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 01:17:29,533 INFO [finetune.py:976] (6/7) Epoch 21, batch 1300, loss[loss=0.1264, simple_loss=0.2007, pruned_loss=0.02608, over 4903.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2414, pruned_loss=0.05007, over 956361.56 frames. ], batch size: 35, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:17:32,372 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.563e+02 1.759e+02 2.181e+02 4.124e+02, threshold=3.519e+02, percent-clipped=1.0 2023-03-27 01:17:52,266 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1986, 2.1609, 1.8406, 2.2072, 2.0895, 2.0070, 2.0500, 2.8951], device='cuda:6'), covar=tensor([0.3799, 0.4925, 0.3630, 0.4185, 0.4419, 0.2713, 0.4622, 0.1686], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0233, 0.0277, 0.0253, 0.0223, 0.0253, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:17:53,435 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7580, 1.6932, 2.1929, 3.5686, 2.4029, 2.3978, 0.9276, 3.0000], device='cuda:6'), covar=tensor([0.1771, 0.1460, 0.1530, 0.0594, 0.0820, 0.1532, 0.2126, 0.0428], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0100, 0.0136, 0.0124, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 01:18:12,140 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.24 vs. limit=5.0 2023-03-27 01:18:12,433 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=115892.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:18:23,979 INFO [finetune.py:976] (6/7) Epoch 21, batch 1350, loss[loss=0.1682, simple_loss=0.2359, pruned_loss=0.05021, over 4897.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.241, pruned_loss=0.05022, over 957707.00 frames. ], batch size: 35, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:18:56,327 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4274, 1.3262, 1.2995, 1.3259, 0.7566, 2.2201, 0.6688, 1.1611], device='cuda:6'), covar=tensor([0.3366, 0.2607, 0.2309, 0.2625, 0.2145, 0.0377, 0.2837, 0.1399], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0115, 0.0120, 0.0122, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 01:19:00,571 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=115953.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 01:19:01,043 INFO [finetune.py:976] (6/7) Epoch 21, batch 1400, loss[loss=0.1703, simple_loss=0.2547, pruned_loss=0.04296, over 4836.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2453, pruned_loss=0.0516, over 957000.38 frames. 
], batch size: 30, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:19:02,861 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.541e+02 1.818e+02 2.071e+02 3.575e+02, threshold=3.635e+02, percent-clipped=1.0 2023-03-27 01:19:09,284 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 01:19:27,709 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0480, 1.8886, 1.6502, 1.8042, 1.8358, 1.8163, 1.9035, 2.5779], device='cuda:6'), covar=tensor([0.3423, 0.3999, 0.3225, 0.3838, 0.3976, 0.2283, 0.3691, 0.1533], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0261, 0.0231, 0.0275, 0.0251, 0.0221, 0.0251, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:19:35,503 INFO [finetune.py:976] (6/7) Epoch 21, batch 1450, loss[loss=0.1872, simple_loss=0.2571, pruned_loss=0.05861, over 4799.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.248, pruned_loss=0.05204, over 956410.82 frames. ], batch size: 25, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:19:43,013 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116014.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:19:51,747 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116026.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:19:52,340 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116027.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:00,700 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4917, 1.5460, 1.6720, 0.9289, 1.6166, 1.8212, 1.8316, 1.4801], device='cuda:6'), covar=tensor([0.0871, 0.0667, 0.0526, 0.0520, 0.0584, 0.0675, 0.0434, 0.0678], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0125, 0.0123, 0.0130, 0.0128, 0.0141, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.9918e-05, 1.0780e-04, 8.9473e-05, 8.7318e-05, 9.1146e-05, 9.1593e-05, 1.0139e-04, 1.0594e-04], device='cuda:6') 2023-03-27 01:20:09,070 INFO [finetune.py:976] (6/7) Epoch 21, batch 1500, loss[loss=0.2349, simple_loss=0.2984, pruned_loss=0.08572, over 4735.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2497, pruned_loss=0.05303, over 954903.88 frames. ], batch size: 59, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:20:10,890 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.231e+02 1.719e+02 2.058e+02 2.312e+02 4.180e+02, threshold=4.116e+02, percent-clipped=2.0 2023-03-27 01:20:11,866 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.45 vs. 
limit=5.0 2023-03-27 01:20:23,170 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116074.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:23,792 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116075.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:23,859 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116075.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 01:20:31,503 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1399, 1.7165, 2.2228, 2.1256, 1.8508, 1.8808, 2.0226, 2.0620], device='cuda:6'), covar=tensor([0.4200, 0.4526, 0.3412, 0.4283, 0.5289, 0.4038, 0.5158, 0.3213], device='cuda:6'), in_proj_covar=tensor([0.0253, 0.0242, 0.0263, 0.0281, 0.0280, 0.0256, 0.0290, 0.0245], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:20:32,632 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4882, 1.4189, 1.9286, 1.8354, 1.5717, 3.3192, 1.3463, 1.5512], device='cuda:6'), covar=tensor([0.0957, 0.1797, 0.1122, 0.0946, 0.1574, 0.0228, 0.1514, 0.1738], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0075, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 01:20:36,117 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1157, 1.8996, 1.7007, 1.8756, 1.8166, 1.8090, 1.8865, 2.6102], device='cuda:6'), covar=tensor([0.3216, 0.4156, 0.3209, 0.3649, 0.4144, 0.2328, 0.3557, 0.1544], device='cuda:6'), in_proj_covar=tensor([0.0286, 0.0261, 0.0232, 0.0275, 0.0252, 0.0222, 0.0252, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:20:40,965 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2551, 2.1586, 2.1656, 1.5586, 2.1163, 2.2681, 2.3117, 1.8059], device='cuda:6'), covar=tensor([0.0526, 0.0624, 0.0661, 0.0811, 0.0700, 0.0670, 0.0542, 0.1067], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0135, 0.0139, 0.0120, 0.0125, 0.0138, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:20:42,656 INFO [finetune.py:976] (6/7) Epoch 21, batch 1550, loss[loss=0.1449, simple_loss=0.2106, pruned_loss=0.03963, over 4350.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2488, pruned_loss=0.05287, over 955667.13 frames. ], batch size: 18, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:20:43,381 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:20:50,850 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116116.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:21:15,951 INFO [finetune.py:976] (6/7) Epoch 21, batch 1600, loss[loss=0.1506, simple_loss=0.2223, pruned_loss=0.03944, over 4858.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2462, pruned_loss=0.05213, over 954641.65 frames. 
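The scaling.py:679 "Whitening" lines compare a per-group metric against a limit (2.0 for the 12-channel groups when num_groups=8, num_channels=96; 5.0 for the single 384-channel group). A plausible reading, assuming the metric measures the eigenvalue spread E[eig^2] / E[eig]^2 of each group's feature covariance: it equals 1.0 when the covariance is a multiple of the identity (fully "white") and grows as channels become correlated, so training would push groups whose metric exceeds the limit back toward whiteness. A self-contained sketch of that metric, under that assumption:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """x: (num_frames, num_channels). Returns E[eig^2] / E[eig]^2 of each
    group's covariance, averaged over groups; 1.0 means fully whitened."""
    num_frames, num_channels = x.shape
    assert num_channels % num_groups == 0
    x = x.reshape(num_frames, num_groups, num_channels // num_groups)
    x = x.transpose(0, 1)                    # (groups, frames, chans/group)
    x = x - x.mean(dim=1, keepdim=True)
    cov = torch.matmul(x.transpose(1, 2), x) / num_frames
    n = cov.shape[-1]
    mean_eig = torch.diagonal(cov, dim1=-2, dim2=-1).sum(-1) / n    # E[eig]
    mean_eig_sq = (cov * cov).sum(dim=(-2, -1)) / n                 # E[eig^2]
    return (mean_eig_sq / mean_eig.pow(2)).mean()

x = torch.randn(10000, 96)                # white features
print(whitening_metric(x, num_groups=8))  # ~1.0, well under limit=2.0
```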
], batch size: 31, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:21:17,776 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.061e+01 1.555e+02 1.841e+02 2.223e+02 4.654e+02, threshold=3.683e+02, percent-clipped=1.0 2023-03-27 01:21:23,357 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116166.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:21:31,955 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116177.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:21:49,854 INFO [finetune.py:976] (6/7) Epoch 21, batch 1650, loss[loss=0.2152, simple_loss=0.2705, pruned_loss=0.0799, over 4716.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2438, pruned_loss=0.05154, over 954219.11 frames. ], batch size: 23, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:21:56,594 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8025, 3.3268, 3.4915, 3.6449, 3.5821, 3.3734, 3.8943, 1.1791], device='cuda:6'), covar=tensor([0.0940, 0.0881, 0.0977, 0.1119, 0.1399, 0.1606, 0.0896, 0.5396], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0238, 0.0276, 0.0288, 0.0329, 0.0281, 0.0299, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:21:59,784 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-27 01:22:15,960 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-27 01:22:19,470 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116248.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:22:23,483 INFO [finetune.py:976] (6/7) Epoch 21, batch 1700, loss[loss=0.1901, simple_loss=0.2593, pruned_loss=0.06048, over 4907.00 frames. ], tot_loss[loss=0.172, simple_loss=0.242, pruned_loss=0.05101, over 957358.81 frames. ], batch size: 43, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:22:25,326 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.524e+01 1.502e+02 1.774e+02 2.116e+02 3.203e+02, threshold=3.548e+02, percent-clipped=0.0 2023-03-27 01:22:55,917 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 01:22:59,246 INFO [finetune.py:976] (6/7) Epoch 21, batch 1750, loss[loss=0.2596, simple_loss=0.3296, pruned_loss=0.09484, over 4261.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2456, pruned_loss=0.05275, over 954534.70 frames. ], batch size: 66, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:23:19,662 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116323.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:23:30,989 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4666, 1.3752, 1.3168, 1.5019, 0.9884, 3.1234, 1.1906, 1.5738], device='cuda:6'), covar=tensor([0.3244, 0.2495, 0.2182, 0.2277, 0.1901, 0.0218, 0.2827, 0.1282], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 01:23:58,749 INFO [finetune.py:976] (6/7) Epoch 21, batch 1800, loss[loss=0.1504, simple_loss=0.2149, pruned_loss=0.04294, over 4202.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2474, pruned_loss=0.05343, over 950243.80 frames. 
], batch size: 18, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:24:00,588 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.248e+02 1.627e+02 1.938e+02 2.363e+02 5.057e+02, threshold=3.876e+02, percent-clipped=3.0 2023-03-27 01:24:04,028 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 01:24:08,549 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116370.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:24:18,614 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116384.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:24:30,112 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 01:24:31,789 INFO [finetune.py:976] (6/7) Epoch 21, batch 1850, loss[loss=0.1715, simple_loss=0.2372, pruned_loss=0.0529, over 4781.00 frames. ], tot_loss[loss=0.1798, simple_loss=0.2502, pruned_loss=0.05467, over 952000.18 frames. ], batch size: 25, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:24:34,764 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8965, 3.3796, 3.5863, 3.7570, 3.6789, 3.4144, 3.9518, 1.2994], device='cuda:6'), covar=tensor([0.0920, 0.0931, 0.1005, 0.0974, 0.1392, 0.1758, 0.0852, 0.5486], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0239, 0.0277, 0.0288, 0.0329, 0.0282, 0.0300, 0.0295], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:24:59,654 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1257, 1.9687, 2.6348, 1.7353, 2.2568, 2.3658, 1.7607, 2.4644], device='cuda:6'), covar=tensor([0.1294, 0.1818, 0.1480, 0.1922, 0.0887, 0.1325, 0.2591, 0.0797], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0204, 0.0190, 0.0189, 0.0173, 0.0213, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:25:00,255 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116446.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:05,405 INFO [finetune.py:976] (6/7) Epoch 21, batch 1900, loss[loss=0.1315, simple_loss=0.2144, pruned_loss=0.0243, over 4780.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.251, pruned_loss=0.05473, over 952635.64 frames. ], batch size: 25, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:25:07,229 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.037e+02 1.572e+02 1.880e+02 2.123e+02 3.861e+02, threshold=3.760e+02, percent-clipped=0.0 2023-03-27 01:25:10,210 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116461.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:16,987 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116472.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:35,765 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-27 01:25:38,738 INFO [finetune.py:976] (6/7) Epoch 21, batch 1950, loss[loss=0.1766, simple_loss=0.2458, pruned_loss=0.05367, over 4817.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2497, pruned_loss=0.05444, over 951428.77 frames. 
], batch size: 41, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:25:39,463 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116505.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:25:40,694 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116507.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:26:07,965 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116548.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:26:11,459 INFO [finetune.py:976] (6/7) Epoch 21, batch 2000, loss[loss=0.1869, simple_loss=0.252, pruned_loss=0.06093, over 4909.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2466, pruned_loss=0.05355, over 950493.88 frames. ], batch size: 37, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:26:13,785 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.123e+01 1.433e+02 1.690e+02 2.067e+02 3.885e+02, threshold=3.380e+02, percent-clipped=2.0 2023-03-27 01:26:13,939 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0850, 1.9544, 1.6940, 1.8570, 2.0436, 1.7618, 2.2958, 2.0589], device='cuda:6'), covar=tensor([0.1334, 0.1897, 0.2938, 0.2339, 0.2576, 0.1729, 0.2783, 0.1766], device='cuda:6'), in_proj_covar=tensor([0.0185, 0.0187, 0.0235, 0.0252, 0.0247, 0.0203, 0.0213, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:26:19,750 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116566.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:26:39,353 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116596.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:26:44,684 INFO [finetune.py:976] (6/7) Epoch 21, batch 2050, loss[loss=0.1929, simple_loss=0.2574, pruned_loss=0.06425, over 4872.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2433, pruned_loss=0.05237, over 952654.54 frames. ], batch size: 31, lr: 3.20e-03, grad_scale: 64.0 2023-03-27 01:26:54,224 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116618.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:27:18,446 INFO [finetune.py:976] (6/7) Epoch 21, batch 2100, loss[loss=0.2124, simple_loss=0.2744, pruned_loss=0.07522, over 4932.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2432, pruned_loss=0.05217, over 953436.47 frames. ], batch size: 33, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:27:20,847 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.617e+02 1.783e+02 2.198e+02 6.495e+02, threshold=3.567e+02, percent-clipped=4.0 2023-03-27 01:27:29,071 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116670.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 01:27:34,427 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116679.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:27:34,490 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116679.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:27:51,925 INFO [finetune.py:976] (6/7) Epoch 21, batch 2150, loss[loss=0.1597, simple_loss=0.2377, pruned_loss=0.04079, over 4822.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2466, pruned_loss=0.05327, over 954316.39 frames. 
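The grad_scale column stepping between 32.0 and 64.0 (64.0 from batch 1350, back down to 32.0 by batch 2100, in the same stretch where a 6.495e+02 grad-norm maximum is logged) is consistent with dynamic fp16 loss scaling: the scale is halved when inf/nan gradients are detected and doubled again after a long overflow-free run. A standard torch.cuda.amp loop showing that mechanism; the tiny model and synthetic data are placeholders, not the script's real setup:

```python
import torch
from torch.cuda.amp import GradScaler, autocast

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(80, 500).to(device)        # stand-in model
opt = torch.optim.SGD(model.parameters(), lr=4e-3)
scaler = GradScaler(enabled=(device == "cuda"))

for step in range(3):
    x = torch.randn(8, 80, device=device)
    opt.zero_grad()
    with autocast(enabled=(device == "cuda")):
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()  # backward on the scaled loss
    scaler.step(opt)               # skipped entirely if grads overflowed
    scaler.update()                # halve on overflow, double after a
                                   # long clean run -- hence 32.0 <-> 64.0
    print(f"step {step}, grad_scale: {scaler.get_scale()}")
```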
], batch size: 33, lr: 3.20e-03, grad_scale: 32.0 2023-03-27 01:27:52,053 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116704.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:00,963 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116718.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:03,969 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7464, 1.7057, 1.5606, 1.6404, 2.0629, 1.9830, 1.7385, 1.5503], device='cuda:6'), covar=tensor([0.0343, 0.0307, 0.0594, 0.0373, 0.0233, 0.0576, 0.0355, 0.0429], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0108, 0.0145, 0.0113, 0.0100, 0.0111, 0.0100, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.6104e-05, 8.2543e-05, 1.1412e-04, 8.6567e-05, 7.8102e-05, 8.2313e-05, 7.4597e-05, 8.6572e-05], device='cuda:6') 2023-03-27 01:28:26,697 INFO [finetune.py:976] (6/7) Epoch 21, batch 2200, loss[loss=0.2052, simple_loss=0.2725, pruned_loss=0.06893, over 4224.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2486, pruned_loss=0.05334, over 954532.42 frames. ], batch size: 65, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:28:30,715 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.107e+02 1.636e+02 2.054e+02 2.505e+02 6.138e+02, threshold=4.108e+02, percent-clipped=5.0 2023-03-27 01:28:32,686 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116761.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:34,580 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-27 01:28:39,668 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116765.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:28:44,486 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116772.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:29:19,248 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116802.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:29:19,276 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=116802.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:29:20,385 INFO [finetune.py:976] (6/7) Epoch 21, batch 2250, loss[loss=0.1784, simple_loss=0.2549, pruned_loss=0.05097, over 4807.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2491, pruned_loss=0.05351, over 954734.00 frames. ], batch size: 40, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:29:28,970 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116809.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:29:39,991 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=116820.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:29:52,887 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.19 vs. 
limit=5.0 2023-03-27 01:29:53,303 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8487, 1.6640, 1.4721, 1.2633, 1.6279, 1.6301, 1.6165, 2.2208], device='cuda:6'), covar=tensor([0.3839, 0.3792, 0.3064, 0.3687, 0.3862, 0.2218, 0.3268, 0.1662], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0232, 0.0277, 0.0253, 0.0223, 0.0253, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:30:01,468 INFO [finetune.py:976] (6/7) Epoch 21, batch 2300, loss[loss=0.2242, simple_loss=0.2822, pruned_loss=0.08311, over 4868.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2496, pruned_loss=0.05366, over 955237.86 frames. ], batch size: 31, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:30:04,886 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.160e+02 1.477e+02 1.740e+02 2.172e+02 4.454e+02, threshold=3.479e+02, percent-clipped=1.0 2023-03-27 01:30:06,812 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116861.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:30:08,565 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=116863.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:30:13,131 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 01:30:25,394 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 01:30:34,137 INFO [finetune.py:976] (6/7) Epoch 21, batch 2350, loss[loss=0.1837, simple_loss=0.2594, pruned_loss=0.05404, over 4911.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2473, pruned_loss=0.05319, over 954705.15 frames. ], batch size: 46, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:30:40,649 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0 2023-03-27 01:31:07,404 INFO [finetune.py:976] (6/7) Epoch 21, batch 2400, loss[loss=0.1694, simple_loss=0.2361, pruned_loss=0.05136, over 4871.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2455, pruned_loss=0.0525, over 956738.85 frames. ], batch size: 49, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:31:09,766 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.566e+02 1.863e+02 2.219e+02 3.648e+02, threshold=3.726e+02, percent-clipped=1.0 2023-03-27 01:31:21,639 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=116974.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:31:25,194 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=116979.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:31:29,388 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2520, 2.0179, 1.7494, 1.9041, 1.9802, 1.9256, 1.9680, 2.7524], device='cuda:6'), covar=tensor([0.3812, 0.4312, 0.3451, 0.3994, 0.4175, 0.2624, 0.4100, 0.1697], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0232, 0.0277, 0.0253, 0.0223, 0.0253, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:31:40,995 INFO [finetune.py:976] (6/7) Epoch 21, batch 2450, loss[loss=0.1815, simple_loss=0.2473, pruned_loss=0.05785, over 4747.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2424, pruned_loss=0.05168, over 953741.76 frames. 
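The zipformer.py:1188 lines track, per encoder stack, a warmup window measured in batches (warmup_begin/warmup_end) together with stochastic layer skipping: most batches drop nothing (num_to_drop=0), but occasionally a single layer is sampled (num_to_drop=1, layers_to_drop={1}). The sampling rule below is a guess for illustration only, dropping at most one uniformly chosen layer with a small fixed probability; it is not the actual zipformer schedule:

```python
import random

def sample_layers_to_drop(num_layers: int, p_drop: float = 0.075) -> set:
    """Illustrative layer-skip sampling: usually the empty set, sometimes
    a single random layer index, matching the shape of the log records."""
    if random.random() >= p_drop:
        return set()                       # num_to_drop=0, the common case
    return {random.randrange(num_layers)}  # e.g. layers_to_drop={1}

for _ in range(5):
    drop = sample_layers_to_drop(num_layers=4)
    print(f"num_to_drop={len(drop)}, layers_to_drop={drop}")
```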
], batch size: 59, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:31:57,424 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117027.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:32:02,821 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0023, 1.9575, 1.7401, 2.2109, 2.5351, 2.1960, 1.9209, 1.6389], device='cuda:6'), covar=tensor([0.2121, 0.1900, 0.1862, 0.1582, 0.1684, 0.1104, 0.2122, 0.1853], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0214, 0.0195, 0.0243, 0.0188, 0.0218, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:32:14,704 INFO [finetune.py:976] (6/7) Epoch 21, batch 2500, loss[loss=0.2128, simple_loss=0.2795, pruned_loss=0.07306, over 4812.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2437, pruned_loss=0.05208, over 955881.26 frames. ], batch size: 41, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:32:17,113 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.510e+02 1.797e+02 2.340e+02 3.968e+02, threshold=3.593e+02, percent-clipped=1.0 2023-03-27 01:32:18,389 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117060.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:32:26,054 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.55 vs. limit=2.0 2023-03-27 01:32:44,590 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-27 01:32:46,785 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117102.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:32:47,904 INFO [finetune.py:976] (6/7) Epoch 21, batch 2550, loss[loss=0.2087, simple_loss=0.2738, pruned_loss=0.07183, over 4754.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2465, pruned_loss=0.05272, over 956990.01 frames. ], batch size: 28, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:33:06,962 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8402, 1.8515, 1.6212, 1.9699, 2.4190, 2.1083, 1.8678, 1.5347], device='cuda:6'), covar=tensor([0.2283, 0.1833, 0.1855, 0.1633, 0.1581, 0.1126, 0.2025, 0.1881], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0210, 0.0213, 0.0195, 0.0243, 0.0188, 0.0218, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:33:19,399 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117150.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:33:21,773 INFO [finetune.py:976] (6/7) Epoch 21, batch 2600, loss[loss=0.1803, simple_loss=0.256, pruned_loss=0.05228, over 4924.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2476, pruned_loss=0.05289, over 957339.57 frames. 
], batch size: 33, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:33:24,216 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.533e+02 1.830e+02 2.226e+02 4.351e+02, threshold=3.661e+02, percent-clipped=3.0 2023-03-27 01:33:24,308 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117158.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:33:26,127 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117161.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:33:58,435 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3254, 1.2835, 1.1945, 1.3573, 1.5921, 1.4516, 1.3005, 1.1763], device='cuda:6'), covar=tensor([0.0350, 0.0284, 0.0644, 0.0277, 0.0225, 0.0519, 0.0331, 0.0417], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0107, 0.0143, 0.0112, 0.0099, 0.0110, 0.0099, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.5590e-05, 8.1880e-05, 1.1283e-04, 8.5707e-05, 7.6901e-05, 8.1333e-05, 7.4059e-05, 8.5788e-05], device='cuda:6') 2023-03-27 01:34:06,273 INFO [finetune.py:976] (6/7) Epoch 21, batch 2650, loss[loss=0.1743, simple_loss=0.2489, pruned_loss=0.04982, over 4907.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2481, pruned_loss=0.05315, over 955771.02 frames. ], batch size: 37, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:34:09,391 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117209.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:34:26,096 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=117221.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:35:03,766 INFO [finetune.py:976] (6/7) Epoch 21, batch 2700, loss[loss=0.1493, simple_loss=0.2181, pruned_loss=0.04022, over 4782.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.248, pruned_loss=0.05269, over 958306.76 frames. ], batch size: 29, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:35:06,202 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.522e+02 1.732e+02 2.127e+02 4.053e+02, threshold=3.464e+02, percent-clipped=3.0 2023-03-27 01:35:23,765 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117274.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:35:30,637 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=117282.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:35:45,281 INFO [finetune.py:976] (6/7) Epoch 21, batch 2750, loss[loss=0.1727, simple_loss=0.2418, pruned_loss=0.05182, over 4895.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2456, pruned_loss=0.05189, over 956661.52 frames. ], batch size: 32, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:35:56,248 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117322.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:01,574 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=117329.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:18,643 INFO [finetune.py:976] (6/7) Epoch 21, batch 2800, loss[loss=0.1566, simple_loss=0.2243, pruned_loss=0.0444, over 4824.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2433, pruned_loss=0.05133, over 957162.98 frames. 
], batch size: 40, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:36:19,824 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1264, 1.9865, 1.5255, 0.6383, 1.7390, 1.8102, 1.6459, 1.8760], device='cuda:6'), covar=tensor([0.0743, 0.0632, 0.1223, 0.1719, 0.1022, 0.1645, 0.1916, 0.0692], device='cuda:6'), in_proj_covar=tensor([0.0167, 0.0189, 0.0195, 0.0180, 0.0206, 0.0206, 0.0219, 0.0192], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:36:21,561 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.495e+02 1.752e+02 2.115e+02 2.888e+02, threshold=3.503e+02, percent-clipped=0.0 2023-03-27 01:36:22,887 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117360.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:42,955 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=117390.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:36:52,479 INFO [finetune.py:976] (6/7) Epoch 21, batch 2850, loss[loss=0.1585, simple_loss=0.2283, pruned_loss=0.04442, over 4362.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.242, pruned_loss=0.05118, over 956358.00 frames. ], batch size: 19, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:36:54,971 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117408.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:37:25,543 INFO [finetune.py:976] (6/7) Epoch 21, batch 2900, loss[loss=0.1632, simple_loss=0.2465, pruned_loss=0.04, over 4794.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2457, pruned_loss=0.0528, over 951853.69 frames. ], batch size: 54, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:37:28,391 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.554e+02 1.875e+02 2.295e+02 6.888e+02, threshold=3.749e+02, percent-clipped=2.0 2023-03-27 01:37:28,496 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117458.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:37:29,624 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1564, 1.2879, 1.3252, 0.7072, 1.2798, 1.5254, 1.5783, 1.2137], device='cuda:6'), covar=tensor([0.0947, 0.0620, 0.0594, 0.0526, 0.0531, 0.0603, 0.0343, 0.0672], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0151, 0.0126, 0.0124, 0.0131, 0.0129, 0.0142, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.0386e-05, 1.0889e-04, 9.0144e-05, 8.7870e-05, 9.1995e-05, 9.1990e-05, 1.0172e-04, 1.0664e-04], device='cuda:6') 2023-03-27 01:37:37,528 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7423, 1.6443, 1.6406, 1.6617, 1.3433, 3.3735, 1.3731, 1.9227], device='cuda:6'), covar=tensor([0.3315, 0.2371, 0.2050, 0.2338, 0.1689, 0.0216, 0.2558, 0.1181], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 01:37:48,023 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. 
limit=2.0 2023-03-27 01:37:54,617 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1329, 1.9341, 1.7794, 2.1079, 2.5989, 2.1405, 1.9995, 1.6321], device='cuda:6'), covar=tensor([0.2201, 0.1922, 0.1799, 0.1679, 0.1817, 0.1168, 0.2190, 0.1910], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0213, 0.0195, 0.0243, 0.0188, 0.0218, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:37:59,200 INFO [finetune.py:976] (6/7) Epoch 21, batch 2950, loss[loss=0.192, simple_loss=0.2574, pruned_loss=0.06327, over 4818.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.248, pruned_loss=0.05325, over 950733.19 frames. ], batch size: 33, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:38:00,498 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117506.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:38:27,529 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.2378, 4.4678, 4.7128, 5.0901, 4.9542, 4.6638, 5.3273, 1.4939], device='cuda:6'), covar=tensor([0.0698, 0.0930, 0.0764, 0.0763, 0.1234, 0.1645, 0.0515, 0.6131], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0240, 0.0278, 0.0290, 0.0331, 0.0282, 0.0301, 0.0297], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:38:32,247 INFO [finetune.py:976] (6/7) Epoch 21, batch 3000, loss[loss=0.1549, simple_loss=0.2325, pruned_loss=0.03862, over 4778.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2484, pruned_loss=0.05303, over 952828.76 frames. ], batch size: 28, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:38:32,247 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 01:38:42,800 INFO [finetune.py:1010] (6/7) Epoch 21, validation: loss=0.1567, simple_loss=0.2253, pruned_loss=0.04408, over 2265189.00 frames. 2023-03-27 01:38:42,800 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6385MB 2023-03-27 01:38:45,675 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.002e+02 1.534e+02 1.924e+02 2.362e+02 3.621e+02, threshold=3.849e+02, percent-clipped=0.0 2023-03-27 01:39:00,155 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117577.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:39:10,592 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0488, 1.4513, 1.9438, 1.9810, 1.7860, 1.7630, 1.8616, 1.8372], device='cuda:6'), covar=tensor([0.4057, 0.4303, 0.3428, 0.3945, 0.5155, 0.4033, 0.5010, 0.3406], device='cuda:6'), in_proj_covar=tensor([0.0255, 0.0243, 0.0264, 0.0282, 0.0280, 0.0256, 0.0291, 0.0245], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:39:11,461 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-27 01:39:17,642 INFO [finetune.py:976] (6/7) Epoch 21, batch 3050, loss[loss=0.1608, simple_loss=0.2488, pruned_loss=0.0364, over 4717.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2501, pruned_loss=0.05378, over 954830.61 frames. 
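The validation record above (finetune.py:1010: loss=0.1567, simple_loss=0.2253, pruned_loss=0.04408 over 2265189 frames) obeys the same combination as the training records: 0.5 * 0.2253 + 0.04408 = 0.15673. A minimal sketch of the frame-weighted validation average it implies; compute_loss and the dev loader are placeholders, not the script's real interface:

```python
import torch

def validate(model, dev_loader, compute_loss) -> float:
    """Frame-weighted average loss over the dev set, as in the log."""
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():                   # no gradients during validation
        for batch in dev_loader:
            # compute_loss is assumed to return (summed loss, num frames)
            loss, num_frames = compute_loss(model, batch)
            tot_loss += loss.item()
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames            # loss per frame
```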
], batch size: 59, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:40:10,143 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8602, 3.4056, 3.5192, 3.7500, 3.6223, 3.3943, 3.9232, 1.1809], device='cuda:6'), covar=tensor([0.0852, 0.0898, 0.0976, 0.0998, 0.1373, 0.1674, 0.0848, 0.5750], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0239, 0.0277, 0.0289, 0.0330, 0.0281, 0.0300, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:40:13,668 INFO [finetune.py:976] (6/7) Epoch 21, batch 3100, loss[loss=0.1547, simple_loss=0.2152, pruned_loss=0.04716, over 4819.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2481, pruned_loss=0.05292, over 954807.50 frames. ], batch size: 30, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:40:19,670 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.886e+01 1.482e+02 1.759e+02 2.208e+02 4.258e+02, threshold=3.518e+02, percent-clipped=1.0 2023-03-27 01:40:43,282 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-27 01:40:46,799 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=117685.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:40:54,477 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.64 vs. limit=5.0 2023-03-27 01:40:58,310 INFO [finetune.py:976] (6/7) Epoch 21, batch 3150, loss[loss=0.1287, simple_loss=0.2133, pruned_loss=0.0221, over 4822.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2457, pruned_loss=0.05207, over 956610.72 frames. ], batch size: 51, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:41:10,048 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4636, 2.3562, 2.7915, 1.5329, 2.3749, 2.6433, 1.9685, 2.8531], device='cuda:6'), covar=tensor([0.1391, 0.1797, 0.1452, 0.2361, 0.0958, 0.1420, 0.2652, 0.0821], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0204, 0.0191, 0.0190, 0.0174, 0.0213, 0.0218, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:41:31,636 INFO [finetune.py:976] (6/7) Epoch 21, batch 3200, loss[loss=0.1758, simple_loss=0.2507, pruned_loss=0.05043, over 4811.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2431, pruned_loss=0.05134, over 954972.83 frames. ], batch size: 38, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:41:34,036 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.569e+02 1.801e+02 2.101e+02 4.822e+02, threshold=3.602e+02, percent-clipped=2.0 2023-03-27 01:42:05,172 INFO [finetune.py:976] (6/7) Epoch 21, batch 3250, loss[loss=0.1736, simple_loss=0.2514, pruned_loss=0.04785, over 4829.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2432, pruned_loss=0.05154, over 955758.51 frames. ], batch size: 39, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:42:38,401 INFO [finetune.py:976] (6/7) Epoch 21, batch 3300, loss[loss=0.252, simple_loss=0.3159, pruned_loss=0.09403, over 4849.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2471, pruned_loss=0.05223, over 957157.64 frames. 
], batch size: 44, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:42:40,847 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.638e+02 1.917e+02 2.241e+02 9.038e+02, threshold=3.833e+02, percent-clipped=2.0 2023-03-27 01:42:54,838 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117877.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:42:54,935 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.41 vs. limit=5.0 2023-03-27 01:43:11,571 INFO [finetune.py:976] (6/7) Epoch 21, batch 3350, loss[loss=0.1663, simple_loss=0.242, pruned_loss=0.04528, over 4761.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2492, pruned_loss=0.05274, over 955736.08 frames. ], batch size: 27, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:43:16,873 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 01:43:22,742 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. limit=2.0 2023-03-27 01:43:25,779 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=117925.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:43:28,083 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9170, 5.0421, 4.7101, 2.6288, 5.1573, 4.0805, 1.0942, 3.6188], device='cuda:6'), covar=tensor([0.2163, 0.1798, 0.1119, 0.2892, 0.0676, 0.0744, 0.4416, 0.1222], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0178, 0.0158, 0.0130, 0.0161, 0.0122, 0.0147, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 01:43:45,044 INFO [finetune.py:976] (6/7) Epoch 21, batch 3400, loss[loss=0.1973, simple_loss=0.2756, pruned_loss=0.05947, over 4858.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2495, pruned_loss=0.05319, over 954581.05 frames. ], batch size: 44, lr: 3.19e-03, grad_scale: 32.0 2023-03-27 01:43:47,450 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.189e+02 1.619e+02 1.880e+02 2.233e+02 5.629e+02, threshold=3.761e+02, percent-clipped=2.0 2023-03-27 01:44:05,862 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=117985.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:44:19,699 INFO [finetune.py:976] (6/7) Epoch 21, batch 3450, loss[loss=0.1764, simple_loss=0.2389, pruned_loss=0.05694, over 4785.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2487, pruned_loss=0.05292, over 953933.37 frames. 
], batch size: 29, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:44:24,703 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5953, 1.4710, 1.3306, 1.4309, 1.8039, 1.7409, 1.5384, 1.3498], device='cuda:6'), covar=tensor([0.0341, 0.0332, 0.0688, 0.0355, 0.0218, 0.0465, 0.0361, 0.0461], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0107, 0.0144, 0.0112, 0.0099, 0.0110, 0.0100, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.5750e-05, 8.1931e-05, 1.1312e-04, 8.5737e-05, 7.7186e-05, 8.1068e-05, 7.4765e-05, 8.5812e-05], device='cuda:6') 2023-03-27 01:44:39,324 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118033.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:44:44,150 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118040.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:44:59,765 INFO [finetune.py:976] (6/7) Epoch 21, batch 3500, loss[loss=0.1617, simple_loss=0.2295, pruned_loss=0.04694, over 4812.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2463, pruned_loss=0.05263, over 954368.44 frames. ], batch size: 51, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:45:02,213 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.127e+01 1.501e+02 1.833e+02 2.184e+02 3.839e+02, threshold=3.666e+02, percent-clipped=2.0 2023-03-27 01:45:19,428 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 01:45:52,338 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118101.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:45:57,919 INFO [finetune.py:976] (6/7) Epoch 21, batch 3550, loss[loss=0.1587, simple_loss=0.2337, pruned_loss=0.04185, over 4794.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2444, pruned_loss=0.05249, over 954064.59 frames. ], batch size: 29, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:46:30,089 INFO [finetune.py:976] (6/7) Epoch 21, batch 3600, loss[loss=0.1917, simple_loss=0.2586, pruned_loss=0.06245, over 4823.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2429, pruned_loss=0.05274, over 954851.77 frames. ], batch size: 41, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:46:33,039 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.558e+02 1.902e+02 2.180e+02 3.976e+02, threshold=3.804e+02, percent-clipped=2.0 2023-03-27 01:46:39,823 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118168.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:47:03,625 INFO [finetune.py:976] (6/7) Epoch 21, batch 3650, loss[loss=0.2063, simple_loss=0.2784, pruned_loss=0.06705, over 4805.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2455, pruned_loss=0.05343, over 955040.63 frames. ], batch size: 41, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:47:19,813 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118229.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 01:47:36,722 INFO [finetune.py:976] (6/7) Epoch 21, batch 3700, loss[loss=0.1771, simple_loss=0.2625, pruned_loss=0.04581, over 4820.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2473, pruned_loss=0.05368, over 951345.21 frames. ], batch size: 40, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:47:38,309 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. 
limit=2.0 2023-03-27 01:47:39,056 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.607e+02 1.941e+02 2.377e+02 3.454e+02, threshold=3.882e+02, percent-clipped=0.0 2023-03-27 01:47:46,805 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118269.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:00,450 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118289.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:10,300 INFO [finetune.py:976] (6/7) Epoch 21, batch 3750, loss[loss=0.1926, simple_loss=0.2644, pruned_loss=0.06035, over 4824.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2485, pruned_loss=0.05364, over 951248.09 frames. ], batch size: 33, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:48:10,479 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 01:48:26,328 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118329.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:26,961 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118330.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:41,434 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118350.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:48:43,714 INFO [finetune.py:976] (6/7) Epoch 21, batch 3800, loss[loss=0.186, simple_loss=0.2667, pruned_loss=0.05264, over 4817.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2493, pruned_loss=0.05353, over 951463.24 frames. ], batch size: 39, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:48:46,092 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.364e+01 1.561e+02 1.815e+02 2.293e+02 4.441e+02, threshold=3.631e+02, percent-clipped=1.0 2023-03-27 01:48:49,251 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5868, 1.4747, 1.4457, 1.4808, 1.1039, 3.4287, 1.3161, 1.7684], device='cuda:6'), covar=tensor([0.3134, 0.2437, 0.2062, 0.2291, 0.1730, 0.0197, 0.2848, 0.1236], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0113, 0.0096, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 01:49:06,616 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118390.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:49:11,214 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118396.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:49:17,071 INFO [finetune.py:976] (6/7) Epoch 21, batch 3850, loss[loss=0.1763, simple_loss=0.2518, pruned_loss=0.05043, over 4905.00 frames. ], tot_loss[loss=0.1777, simple_loss=0.2489, pruned_loss=0.05328, over 953072.44 frames. 
], batch size: 43, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:49:18,930 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4696, 3.2991, 3.1984, 1.4865, 3.4587, 2.6638, 0.6922, 2.3586], device='cuda:6'), covar=tensor([0.2473, 0.1917, 0.1740, 0.3348, 0.1221, 0.0964, 0.4391, 0.1483], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0178, 0.0158, 0.0130, 0.0161, 0.0122, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 01:49:24,432 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7598, 1.6702, 1.5843, 1.6607, 1.4003, 3.6667, 1.5084, 2.0360], device='cuda:6'), covar=tensor([0.3093, 0.2376, 0.2004, 0.2309, 0.1494, 0.0175, 0.2655, 0.1144], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 01:49:50,289 INFO [finetune.py:976] (6/7) Epoch 21, batch 3900, loss[loss=0.1495, simple_loss=0.2181, pruned_loss=0.04045, over 4770.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2456, pruned_loss=0.05198, over 953139.79 frames. ], batch size: 23, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:49:52,686 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.530e+02 1.819e+02 2.327e+02 4.856e+02, threshold=3.639e+02, percent-clipped=2.0 2023-03-27 01:50:25,013 INFO [finetune.py:976] (6/7) Epoch 21, batch 3950, loss[loss=0.1841, simple_loss=0.2563, pruned_loss=0.05595, over 4915.00 frames. ], tot_loss[loss=0.173, simple_loss=0.243, pruned_loss=0.05144, over 956512.29 frames. ], batch size: 37, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:50:48,089 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118524.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:50:52,804 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5196, 2.5332, 2.6333, 1.4438, 2.9017, 3.1248, 2.7734, 2.4452], device='cuda:6'), covar=tensor([0.0886, 0.0654, 0.0493, 0.0723, 0.0626, 0.0672, 0.0417, 0.0646], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0126, 0.0123, 0.0131, 0.0129, 0.0141, 0.0148], device='cuda:6'), out_proj_covar=tensor([9.0104e-05, 1.0829e-04, 9.0367e-05, 8.7164e-05, 9.2108e-05, 9.1682e-05, 1.0131e-04, 1.0617e-04], device='cuda:6') 2023-03-27 01:51:09,145 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-27 01:51:19,654 INFO [finetune.py:976] (6/7) Epoch 21, batch 4000, loss[loss=0.2202, simple_loss=0.2988, pruned_loss=0.07078, over 4855.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2411, pruned_loss=0.05067, over 954638.79 frames. 
], batch size: 44, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:51:26,595 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.742e+01 1.543e+02 1.809e+02 2.201e+02 4.154e+02, threshold=3.618e+02, percent-clipped=2.0 2023-03-27 01:51:48,833 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1728, 2.2236, 1.7989, 2.3101, 2.1048, 2.0664, 2.1088, 2.8783], device='cuda:6'), covar=tensor([0.3874, 0.4997, 0.3600, 0.4310, 0.4542, 0.2609, 0.4411, 0.1767], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0233, 0.0277, 0.0254, 0.0224, 0.0253, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:51:56,640 INFO [finetune.py:976] (6/7) Epoch 21, batch 4050, loss[loss=0.1995, simple_loss=0.2746, pruned_loss=0.06217, over 4814.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2447, pruned_loss=0.05228, over 954728.67 frames. ], batch size: 39, lr: 3.18e-03, grad_scale: 32.0 2023-03-27 01:52:11,503 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118625.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:52:11,673 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0 2023-03-27 01:52:15,705 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=118631.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:52:24,645 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118645.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:52:30,075 INFO [finetune.py:976] (6/7) Epoch 21, batch 4100, loss[loss=0.152, simple_loss=0.2208, pruned_loss=0.04158, over 4783.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2476, pruned_loss=0.05329, over 951752.68 frames. ], batch size: 26, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:52:33,510 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.828e+01 1.601e+02 1.824e+02 2.338e+02 3.980e+02, threshold=3.647e+02, percent-clipped=1.0 2023-03-27 01:52:51,562 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118685.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:52:56,342 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=118692.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:52:58,685 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118696.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:53:03,515 INFO [finetune.py:976] (6/7) Epoch 21, batch 4150, loss[loss=0.1731, simple_loss=0.2429, pruned_loss=0.0516, over 4917.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.2493, pruned_loss=0.05402, over 952016.10 frames. ], batch size: 38, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:53:31,225 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118744.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:53:37,408 INFO [finetune.py:976] (6/7) Epoch 21, batch 4200, loss[loss=0.1733, simple_loss=0.2554, pruned_loss=0.04565, over 4879.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2489, pruned_loss=0.05276, over 953947.42 frames. 
], batch size: 43, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:53:39,820 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.932e+01 1.492e+02 1.761e+02 2.066e+02 3.902e+02, threshold=3.521e+02, percent-clipped=1.0 2023-03-27 01:54:11,365 INFO [finetune.py:976] (6/7) Epoch 21, batch 4250, loss[loss=0.1187, simple_loss=0.1816, pruned_loss=0.02792, over 4226.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2467, pruned_loss=0.05186, over 953070.71 frames. ], batch size: 18, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:54:25,982 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118824.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:54:38,751 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.59 vs. limit=5.0 2023-03-27 01:54:45,140 INFO [finetune.py:976] (6/7) Epoch 21, batch 4300, loss[loss=0.149, simple_loss=0.2046, pruned_loss=0.04672, over 4209.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2435, pruned_loss=0.05112, over 952997.82 frames. ], batch size: 18, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:54:47,579 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.108e+02 1.563e+02 1.855e+02 2.179e+02 3.656e+02, threshold=3.709e+02, percent-clipped=1.0 2023-03-27 01:54:57,553 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:55:18,874 INFO [finetune.py:976] (6/7) Epoch 21, batch 4350, loss[loss=0.1389, simple_loss=0.2012, pruned_loss=0.03828, over 4191.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2405, pruned_loss=0.05017, over 954134.07 frames. ], batch size: 18, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:55:33,226 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118925.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:55:48,467 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118945.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:55:59,294 INFO [finetune.py:976] (6/7) Epoch 21, batch 4400, loss[loss=0.2005, simple_loss=0.2669, pruned_loss=0.06711, over 4888.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2437, pruned_loss=0.05195, over 954284.60 frames. 
], batch size: 32, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:56:01,712 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.753e+01 1.466e+02 1.745e+02 2.136e+02 3.634e+02, threshold=3.490e+02, percent-clipped=0.0 2023-03-27 01:56:18,412 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118973.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:56:31,298 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=118985.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:56:36,944 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=118987.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:56:39,994 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.4509, 2.9081, 2.6954, 1.3770, 3.0768, 2.3219, 1.7786, 2.5772], device='cuda:6'), covar=tensor([0.0770, 0.1170, 0.2014, 0.2647, 0.1454, 0.2149, 0.2949, 0.1420], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0182, 0.0210, 0.0209, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:56:40,528 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=118993.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:56:51,293 INFO [finetune.py:976] (6/7) Epoch 21, batch 4450, loss[loss=0.1905, simple_loss=0.2642, pruned_loss=0.05842, over 4735.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2461, pruned_loss=0.05231, over 954845.14 frames. ], batch size: 54, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:57:11,420 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119033.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:57:16,766 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2099, 1.8032, 1.6823, 0.9712, 1.9478, 2.0802, 1.9705, 1.6713], device='cuda:6'), covar=tensor([0.1207, 0.0570, 0.0621, 0.0663, 0.0477, 0.0629, 0.0428, 0.0702], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0149, 0.0126, 0.0123, 0.0130, 0.0128, 0.0141, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.9306e-05, 1.0747e-04, 8.9880e-05, 8.6632e-05, 9.1639e-05, 9.1562e-05, 1.0119e-04, 1.0565e-04], device='cuda:6') 2023-03-27 01:57:20,390 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-27 01:57:25,002 INFO [finetune.py:976] (6/7) Epoch 21, batch 4500, loss[loss=0.1711, simple_loss=0.2357, pruned_loss=0.05325, over 4925.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2475, pruned_loss=0.0528, over 952855.88 frames. 
], batch size: 38, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:57:27,416 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.079e+02 1.548e+02 1.910e+02 2.429e+02 4.520e+02, threshold=3.820e+02, percent-clipped=3.0 2023-03-27 01:57:37,169 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9601, 1.1573, 1.8596, 1.8559, 1.6984, 1.6731, 1.7552, 1.8107], device='cuda:6'), covar=tensor([0.3650, 0.3410, 0.3164, 0.3481, 0.4760, 0.3876, 0.4308, 0.2976], device='cuda:6'), in_proj_covar=tensor([0.0254, 0.0241, 0.0263, 0.0281, 0.0279, 0.0256, 0.0289, 0.0244], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:57:38,336 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119075.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:57:49,668 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4733, 2.2961, 1.9263, 0.9352, 2.1397, 1.8525, 1.7031, 2.0741], device='cuda:6'), covar=tensor([0.1055, 0.0897, 0.1762, 0.2268, 0.1526, 0.2261, 0.2317, 0.1129], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0183, 0.0210, 0.0209, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 01:57:51,488 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119093.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:57:58,456 INFO [finetune.py:976] (6/7) Epoch 21, batch 4550, loss[loss=0.1683, simple_loss=0.2406, pruned_loss=0.04804, over 4863.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2489, pruned_loss=0.0534, over 952218.85 frames. ], batch size: 31, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:58:19,749 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119136.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 01:58:31,551 INFO [finetune.py:976] (6/7) Epoch 21, batch 4600, loss[loss=0.2078, simple_loss=0.2462, pruned_loss=0.08471, over 4023.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.248, pruned_loss=0.05291, over 952264.33 frames. ], batch size: 17, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:58:31,670 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119154.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 01:58:34,457 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.653e+02 1.869e+02 2.317e+02 3.451e+02, threshold=3.738e+02, percent-clipped=0.0 2023-03-27 01:59:05,268 INFO [finetune.py:976] (6/7) Epoch 21, batch 4650, loss[loss=0.1615, simple_loss=0.2291, pruned_loss=0.04691, over 4924.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2451, pruned_loss=0.05228, over 953562.77 frames. ], batch size: 38, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:59:13,786 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 01:59:37,347 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-27 01:59:38,315 INFO [finetune.py:976] (6/7) Epoch 21, batch 4700, loss[loss=0.142, simple_loss=0.208, pruned_loss=0.03805, over 4790.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2421, pruned_loss=0.05134, over 954585.61 frames. 
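The attn_weights_entropy dumps above track how peaked each attention head is: for a softmax row p, the entropy -sum_i p_i * log(p_i) is 0 for a one-hot attention pattern and log(key_len) for a uniform one, so values drifting toward either extreme flag degenerate heads. A sketch of the diagnostic; the exact reduction used in zipformer.py is not visible in the log, and averaging over query positions is an assumption:

    import torch

    # Sketch: per-head entropy of attention weights as a health check.
    # attn: (num_heads, query_len, key_len), each row a softmax output.
    def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
        eps = 1.0e-20  # avoid log(0) on exactly-zero weights
        h = -(attn * (attn + eps).log()).sum(dim=-1)  # (heads, query_len)
        return h.mean(dim=-1)  # assumed reduction: mean over queries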
], batch size: 29, lr: 3.18e-03, grad_scale: 64.0 2023-03-27 01:59:40,727 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.521e+02 1.791e+02 2.382e+02 6.096e+02, threshold=3.583e+02, percent-clipped=7.0 2023-03-27 01:59:59,480 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=119287.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:00:08,800 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8717, 1.3104, 1.8903, 1.8537, 1.6956, 1.6441, 1.8832, 1.7687], device='cuda:6'), covar=tensor([0.3500, 0.3637, 0.2913, 0.3372, 0.4224, 0.3501, 0.3719, 0.2690], device='cuda:6'), in_proj_covar=tensor([0.0257, 0.0244, 0.0265, 0.0285, 0.0282, 0.0259, 0.0292, 0.0247], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:00:11,569 INFO [finetune.py:976] (6/7) Epoch 21, batch 4750, loss[loss=0.1301, simple_loss=0.1976, pruned_loss=0.03127, over 4742.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2403, pruned_loss=0.05067, over 955985.10 frames. ], batch size: 27, lr: 3.17e-03, grad_scale: 64.0 2023-03-27 02:00:31,335 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119335.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:00:36,534 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4424, 2.3646, 2.4987, 1.8111, 2.2234, 2.5093, 2.5902, 1.9905], device='cuda:6'), covar=tensor([0.0592, 0.0605, 0.0619, 0.0886, 0.0726, 0.0669, 0.0577, 0.1126], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0136, 0.0140, 0.0120, 0.0126, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:00:37,148 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5117, 1.4168, 1.5336, 0.8300, 1.5150, 1.5371, 1.5206, 1.3417], device='cuda:6'), covar=tensor([0.0605, 0.0807, 0.0657, 0.0950, 0.0965, 0.0743, 0.0667, 0.1323], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0136, 0.0140, 0.0120, 0.0126, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:00:42,925 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6979, 1.4717, 2.0865, 3.0536, 2.1149, 2.2073, 1.0861, 2.5866], device='cuda:6'), covar=tensor([0.1552, 0.1369, 0.1165, 0.0656, 0.0753, 0.1831, 0.1608, 0.0494], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0166, 0.0102, 0.0139, 0.0126, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 02:00:44,658 INFO [finetune.py:976] (6/7) Epoch 21, batch 4800, loss[loss=0.2182, simple_loss=0.2898, pruned_loss=0.07332, over 4923.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.243, pruned_loss=0.05175, over 955414.72 frames. ], batch size: 38, lr: 3.17e-03, grad_scale: 64.0 2023-03-27 02:00:47,500 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.488e+02 1.781e+02 2.195e+02 3.360e+02, threshold=3.562e+02, percent-clipped=0.0 2023-03-27 02:01:22,405 INFO [finetune.py:976] (6/7) Epoch 21, batch 4850, loss[loss=0.1394, simple_loss=0.2181, pruned_loss=0.03033, over 4879.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2471, pruned_loss=0.05299, over 957109.25 frames. 
], batch size: 32, lr: 3.17e-03, grad_scale: 64.0 2023-03-27 02:01:47,012 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5181, 3.4475, 3.2435, 1.5309, 3.5577, 2.6278, 0.8528, 2.2470], device='cuda:6'), covar=tensor([0.2381, 0.2082, 0.1878, 0.3334, 0.1160, 0.1020, 0.4156, 0.1542], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0178, 0.0160, 0.0130, 0.0161, 0.0123, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 02:01:55,526 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=119431.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:02:13,811 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-27 02:02:15,266 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=119449.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 02:02:19,174 INFO [finetune.py:976] (6/7) Epoch 21, batch 4900, loss[loss=0.2127, simple_loss=0.281, pruned_loss=0.0722, over 4859.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2478, pruned_loss=0.05272, over 957226.83 frames. ], batch size: 34, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:02:25,215 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.684e+02 1.937e+02 2.365e+02 4.201e+02, threshold=3.874e+02, percent-clipped=2.0 2023-03-27 02:02:39,406 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.86 vs. limit=2.0 2023-03-27 02:02:47,617 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6953, 0.8088, 1.8442, 1.7242, 1.6360, 1.5857, 1.6579, 1.7554], device='cuda:6'), covar=tensor([0.3634, 0.3756, 0.3132, 0.3427, 0.4504, 0.3391, 0.3906, 0.2816], device='cuda:6'), in_proj_covar=tensor([0.0256, 0.0243, 0.0265, 0.0284, 0.0282, 0.0258, 0.0292, 0.0246], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:02:55,582 INFO [finetune.py:976] (6/7) Epoch 21, batch 4950, loss[loss=0.18, simple_loss=0.2269, pruned_loss=0.06653, over 4254.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2492, pruned_loss=0.05282, over 956666.73 frames. ], batch size: 18, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:02:56,420 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.90 vs. 
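Between batches 4850 and 4900 above, grad_scale drops from 64.0 to 32.0. With fp16 training this is the standard loss-scaler backoff: when scaled gradients overflow, the step is skipped and the scale is halved, and after a long enough run of clean steps the scale is doubled again (it is back at 64.0 by Epoch 22, batch 1200 further down). A sketch using torch.cuda.amp.GradScaler; whether finetune.py uses exactly these growth and backoff settings is an assumption:

    import torch

    # Sketch: dynamic loss scaling that halves on overflow (64 -> 32)
    # and doubles back after growth_interval clean steps (32 -> 64).
    scaler = torch.cuda.amp.GradScaler(
        init_scale=64.0, backoff_factor=0.5,
        growth_factor=2.0, growth_interval=2000)

    def fp16_step(loss, optimizer):
        scaler.scale(loss).backward()
        scaler.step(optimizer)  # skipped internally if grads held inf/nan
        scaler.update()         # halves the scale on overflow, else may grow it
        optimizer.zero_grad()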
limit=5.0 2023-03-27 02:03:02,750 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119514.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:03:05,267 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1847, 2.1584, 1.9499, 2.3632, 2.8200, 2.2953, 2.1611, 1.7248], device='cuda:6'), covar=tensor([0.2230, 0.1964, 0.1927, 0.1744, 0.1692, 0.1113, 0.2106, 0.1930], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0210, 0.0214, 0.0195, 0.0243, 0.0188, 0.0217, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:03:18,392 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5876, 1.4815, 1.3140, 1.5764, 1.7443, 1.6284, 1.0942, 1.3727], device='cuda:6'), covar=tensor([0.2008, 0.1901, 0.1831, 0.1590, 0.1342, 0.1175, 0.2228, 0.1749], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0214, 0.0195, 0.0243, 0.0189, 0.0218, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:03:29,006 INFO [finetune.py:976] (6/7) Epoch 21, batch 5000, loss[loss=0.1521, simple_loss=0.2292, pruned_loss=0.03755, over 4906.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2485, pruned_loss=0.05277, over 958058.70 frames. ], batch size: 36, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:03:32,980 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.554e+02 1.853e+02 2.138e+02 3.358e+02, threshold=3.705e+02, percent-clipped=0.0 2023-03-27 02:03:43,312 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119575.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:04:02,226 INFO [finetune.py:976] (6/7) Epoch 21, batch 5050, loss[loss=0.1572, simple_loss=0.2287, pruned_loss=0.04286, over 4823.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2448, pruned_loss=0.05157, over 956305.65 frames. ], batch size: 41, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:04:33,054 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0 2023-03-27 02:04:35,257 INFO [finetune.py:976] (6/7) Epoch 21, batch 5100, loss[loss=0.1457, simple_loss=0.2125, pruned_loss=0.03941, over 4751.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2408, pruned_loss=0.04996, over 956775.14 frames. ], batch size: 27, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:04:37,070 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5218, 1.6035, 2.1135, 1.8313, 1.7973, 3.6144, 1.4900, 1.6604], device='cuda:6'), covar=tensor([0.0855, 0.1624, 0.0968, 0.0928, 0.1404, 0.0215, 0.1417, 0.1687], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:04:39,201 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.052e+02 1.479e+02 1.750e+02 2.173e+02 3.976e+02, threshold=3.500e+02, percent-clipped=1.0 2023-03-27 02:04:42,346 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.61 vs. 
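The learning rate in these lines creeps down from 3.18e-03 to 3.17e-03 around batch 119k and reaches 3.15e-03 by Epoch 22. That is consistent with icefall's Eden schedule evaluated with this run's configured base_lr=0.004, lr_batches=100000 and lr_epochs=100; treat the formula below as a reconstruction that reproduces the logged values, not a quote of optim.py:

    # Sketch: Eden learning-rate schedule reproducing the logged values.
    def eden_lr(base_lr: float, batch: int, epoch: int,
                lr_batches: float = 100000.0,
                lr_epochs: float = 100.0) -> float:
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    print(f"{eden_lr(0.004, batch=119000, epoch=21):.2e}")  # -> 3.17e-03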
limit=2.0 2023-03-27 02:04:43,505 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=119665.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:04:47,051 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0350, 1.3581, 0.7614, 1.9506, 2.5039, 1.8249, 1.5576, 1.8296], device='cuda:6'), covar=tensor([0.1522, 0.2112, 0.2263, 0.1255, 0.1764, 0.1929, 0.1589, 0.2091], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0120, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 02:05:01,339 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6270, 1.5459, 2.2513, 1.9185, 1.9722, 4.0885, 1.4614, 1.6973], device='cuda:6'), covar=tensor([0.0813, 0.1672, 0.1089, 0.0926, 0.1382, 0.0178, 0.1393, 0.1631], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:05:08,857 INFO [finetune.py:976] (6/7) Epoch 21, batch 5150, loss[loss=0.2482, simple_loss=0.3192, pruned_loss=0.08861, over 4773.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.241, pruned_loss=0.05007, over 957085.07 frames. ], batch size: 59, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:05:13,665 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1557, 2.1075, 1.6506, 2.3188, 2.1081, 1.8189, 2.4645, 2.1701], device='cuda:6'), covar=tensor([0.1226, 0.1938, 0.2808, 0.2160, 0.2261, 0.1566, 0.2939, 0.1615], device='cuda:6'), in_proj_covar=tensor([0.0186, 0.0187, 0.0233, 0.0251, 0.0244, 0.0202, 0.0213, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:05:16,976 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6778, 2.5340, 2.1331, 1.1391, 2.3035, 2.0459, 1.8487, 2.3397], device='cuda:6'), covar=tensor([0.0789, 0.0758, 0.1583, 0.1955, 0.1372, 0.2123, 0.2138, 0.0914], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0198, 0.0183, 0.0209, 0.0208, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:05:24,661 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=119726.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:05:27,584 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=119731.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:05:30,095 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0866, 2.0708, 1.8588, 2.2649, 2.4507, 2.2118, 2.0267, 1.5806], device='cuda:6'), covar=tensor([0.1861, 0.1614, 0.1543, 0.1342, 0.1701, 0.0984, 0.1777, 0.1629], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0214, 0.0196, 0.0244, 0.0189, 0.0218, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:05:38,887 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=119749.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 02:05:42,228 INFO [finetune.py:976] (6/7) Epoch 21, batch 5200, loss[loss=0.1921, simple_loss=0.2669, pruned_loss=0.05866, over 4898.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2458, pruned_loss=0.05189, over 954705.57 frames. 
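The scaling.py Whitening lines compare a metric against a limit (2.0 at the 8-group sites, 5.0 at the 1-group, 384-channel site) and log only when the metric approaches its limit. A natural metric of this kind equals 1.0 when the per-group channel covariance is proportional to the identity ("white") and grows as energy concentrates in fewer directions; the sketch below uses one such formulation and may differ in detail from icefall's scaling.py:

    import torch

    # Sketch: whitening metric = g * trace(C @ C) / trace(C)**2 per
    # group, averaged over groups. Equals 1.0 for C = a*I; approaches
    # g when one direction carries all the energy (rank-1 C).
    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        n, c = x.shape                      # (num_frames, num_channels)
        g = c // num_groups
        x = x.reshape(n, num_groups, g).transpose(0, 1)  # (groups, n, g)
        cov = x.transpose(1, 2) @ x / n                  # (groups, g, g)
        tr = cov.diagonal(dim1=1, dim2=2).sum(-1)        # trace(C)
        tr2 = (cov * cov).sum(dim=(1, 2))                # trace(C @ C), C symmetric
        return (g * tr2 / tr.pow(2)).mean().item()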
], batch size: 35, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:05:45,722 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.656e+02 1.877e+02 2.270e+02 4.720e+02, threshold=3.754e+02, percent-clipped=1.0 2023-03-27 02:05:59,325 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119779.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:06:10,304 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 02:06:10,789 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=119797.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:06:15,132 INFO [finetune.py:976] (6/7) Epoch 21, batch 5250, loss[loss=0.1363, simple_loss=0.216, pruned_loss=0.02831, over 4755.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2477, pruned_loss=0.05194, over 955681.67 frames. ], batch size: 27, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:06:59,458 INFO [finetune.py:976] (6/7) Epoch 21, batch 5300, loss[loss=0.2187, simple_loss=0.2779, pruned_loss=0.07969, over 4915.00 frames. ], tot_loss[loss=0.1776, simple_loss=0.2495, pruned_loss=0.05279, over 955543.22 frames. ], batch size: 41, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:07:06,786 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-27 02:07:07,160 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.978e+01 1.512e+02 1.747e+02 2.069e+02 4.039e+02, threshold=3.495e+02, percent-clipped=1.0 2023-03-27 02:07:10,170 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.48 vs. limit=2.0 2023-03-27 02:07:19,027 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=119870.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:07:54,828 INFO [finetune.py:976] (6/7) Epoch 21, batch 5350, loss[loss=0.1501, simple_loss=0.2171, pruned_loss=0.04152, over 4854.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.249, pruned_loss=0.05261, over 955623.05 frames. 
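The zipformer.py:1188 lines track, for each encoder stack, a warmup window in batches (the staggered pairs 666.7-1333.3, 1333.3-2000.0, up to 3333.3-4000.0) plus how many of that stack's layers are randomly bypassed on this batch; this deep into training num_to_drop is almost always 0, with occasional single-layer drops such as layers_to_drop={2}. A sketch of warmup-scheduled stochastic layer skipping; the probability schedule below is invented for illustration, and only the logged fields are taken from the log:

    import random

    # Sketch: pick layers to bypass, aggressively before the stack's
    # warmup window, rarely after it (hypothetical schedule).
    def pick_layers_to_drop(num_layers: int, batch_count: float,
                            warmup_begin: float, warmup_end: float,
                            final_drop_prob: float = 0.025) -> set:
        if batch_count < warmup_begin:
            drop_prob = 0.5
        elif batch_count < warmup_end:  # anneal across the window
            t = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            drop_prob = 0.5 * (1.0 - t) + final_drop_prob * t
        else:
            drop_prob = final_drop_prob
        return {i for i in range(num_layers) if random.random() < drop_prob}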
], batch size: 31, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:08:00,891 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6990, 1.1921, 0.9087, 1.6687, 2.1532, 1.5976, 1.3719, 1.6236], device='cuda:6'), covar=tensor([0.1485, 0.2134, 0.1953, 0.1216, 0.1818, 0.1917, 0.1547, 0.2003], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 02:08:10,926 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2633, 1.9135, 2.4158, 1.7084, 2.2615, 2.5507, 1.7810, 2.6125], device='cuda:6'), covar=tensor([0.1209, 0.1836, 0.1485, 0.2003, 0.0898, 0.1230, 0.2589, 0.0796], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0205, 0.0192, 0.0190, 0.0174, 0.0214, 0.0217, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:08:10,941 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6519, 1.7691, 1.7806, 0.9392, 1.9226, 2.0582, 2.0110, 1.5599], device='cuda:6'), covar=tensor([0.1135, 0.0749, 0.0551, 0.0627, 0.0423, 0.0681, 0.0386, 0.0787], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0150, 0.0127, 0.0124, 0.0132, 0.0129, 0.0143, 0.0149], device='cuda:6'), out_proj_covar=tensor([9.0179e-05, 1.0860e-04, 9.0994e-05, 8.7453e-05, 9.2694e-05, 9.2166e-05, 1.0229e-04, 1.0667e-04], device='cuda:6') 2023-03-27 02:08:28,087 INFO [finetune.py:976] (6/7) Epoch 21, batch 5400, loss[loss=0.1462, simple_loss=0.2222, pruned_loss=0.03509, over 4839.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2465, pruned_loss=0.05219, over 955525.86 frames. ], batch size: 30, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:08:31,173 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.474e+02 1.662e+02 2.015e+02 3.492e+02, threshold=3.324e+02, percent-clipped=0.0 2023-03-27 02:09:02,906 INFO [finetune.py:976] (6/7) Epoch 21, batch 5450, loss[loss=0.151, simple_loss=0.2221, pruned_loss=0.03991, over 4936.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.244, pruned_loss=0.05182, over 957193.27 frames. ], batch size: 33, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:09:13,780 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=120021.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:09:27,889 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5537, 1.4399, 1.5767, 0.8229, 1.4827, 1.5639, 1.5640, 1.3705], device='cuda:6'), covar=tensor([0.0551, 0.0748, 0.0652, 0.0906, 0.0985, 0.0763, 0.0668, 0.1251], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0137, 0.0139, 0.0121, 0.0126, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:09:36,197 INFO [finetune.py:976] (6/7) Epoch 21, batch 5500, loss[loss=0.1308, simple_loss=0.2009, pruned_loss=0.03033, over 4761.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2402, pruned_loss=0.05, over 956604.05 frames. 
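Each progress line above pairs a per-batch loss ("over 4839.00 frames") with a tot_loss reported over roughly 9.5e5 frames; the fractional frame counts (e.g. 955525.86) indicate tot_loss is a decayed, frame-weighted running average over the epoch rather than a plain sum. A sketch of that bookkeeping, with the decay constant assumed:

    # Sketch: decayed frame-weighted running average behind tot_loss.
    class RunningLoss:
        def __init__(self, decay: float = 0.999):  # decay is an assumption
            self.decay = decay
            self.frames = 0.0
            self.weighted_loss = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> float:
            self.frames = self.decay * self.frames + batch_frames
            self.weighted_loss = (self.decay * self.weighted_loss
                                  + batch_loss * batch_frames)
            return self.weighted_loss / self.frames  # the logged tot_loss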
], batch size: 27, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:09:39,681 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.953e+01 1.422e+02 1.683e+02 2.099e+02 3.794e+02, threshold=3.366e+02, percent-clipped=2.0 2023-03-27 02:09:49,456 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1389, 2.2031, 1.8499, 2.2432, 2.1149, 2.0381, 2.0594, 2.8959], device='cuda:6'), covar=tensor([0.3957, 0.4609, 0.3511, 0.4181, 0.4437, 0.2478, 0.4557, 0.1743], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0262, 0.0233, 0.0278, 0.0254, 0.0224, 0.0253, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:10:09,942 INFO [finetune.py:976] (6/7) Epoch 21, batch 5550, loss[loss=0.1839, simple_loss=0.2552, pruned_loss=0.05636, over 4905.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2412, pruned_loss=0.05042, over 955047.41 frames. ], batch size: 36, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:10:21,978 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7080, 1.5851, 1.9475, 3.0114, 2.0711, 2.2977, 1.1447, 2.5268], device='cuda:6'), covar=tensor([0.1640, 0.1355, 0.1260, 0.0559, 0.0854, 0.1240, 0.1704, 0.0538], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0166, 0.0102, 0.0139, 0.0126, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 02:10:25,020 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4740, 1.6240, 1.2431, 1.6473, 1.9335, 1.7604, 1.6239, 1.3873], device='cuda:6'), covar=tensor([0.0370, 0.0302, 0.0694, 0.0308, 0.0223, 0.0578, 0.0346, 0.0404], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0107, 0.0145, 0.0112, 0.0100, 0.0112, 0.0101, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.6570e-05, 8.2428e-05, 1.1403e-04, 8.6184e-05, 7.7915e-05, 8.3031e-05, 7.5353e-05, 8.6506e-05], device='cuda:6') 2023-03-27 02:10:42,286 INFO [finetune.py:976] (6/7) Epoch 21, batch 5600, loss[loss=0.1799, simple_loss=0.2508, pruned_loss=0.05445, over 4213.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2438, pruned_loss=0.05045, over 952725.87 frames. ], batch size: 65, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:10:45,197 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.550e+02 1.831e+02 2.203e+02 3.727e+02, threshold=3.662e+02, percent-clipped=1.0 2023-03-27 02:10:45,599 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 02:10:49,954 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6049, 1.3874, 2.0351, 1.8171, 1.4926, 3.5590, 1.2886, 1.4418], device='cuda:6'), covar=tensor([0.0958, 0.1962, 0.1158, 0.0976, 0.1805, 0.0212, 0.1633, 0.1985], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:10:52,178 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=120170.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:11:05,960 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 02:11:12,066 INFO [finetune.py:976] (6/7) Epoch 21, batch 5650, loss[loss=0.1783, simple_loss=0.2472, pruned_loss=0.05466, over 4801.00 frames. 
], tot_loss[loss=0.1755, simple_loss=0.2477, pruned_loss=0.05168, over 953043.85 frames. ], batch size: 41, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:11:20,664 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=120218.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:11:31,322 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3358, 2.3118, 1.9180, 2.0967, 2.8710, 2.8787, 2.3021, 2.3421], device='cuda:6'), covar=tensor([0.0355, 0.0342, 0.0563, 0.0324, 0.0237, 0.0631, 0.0402, 0.0365], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0108, 0.0145, 0.0112, 0.0100, 0.0112, 0.0101, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.6730e-05, 8.2510e-05, 1.1387e-04, 8.6221e-05, 7.7816e-05, 8.3057e-05, 7.5427e-05, 8.6534e-05], device='cuda:6') 2023-03-27 02:11:32,483 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9752, 1.9410, 1.8309, 1.9473, 1.7837, 3.8371, 1.9611, 2.4156], device='cuda:6'), covar=tensor([0.3815, 0.2803, 0.2123, 0.2675, 0.1391, 0.0265, 0.2114, 0.0921], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0124, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:11:41,961 INFO [finetune.py:976] (6/7) Epoch 21, batch 5700, loss[loss=0.1985, simple_loss=0.261, pruned_loss=0.06797, over 4226.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2442, pruned_loss=0.05133, over 934003.17 frames. ], batch size: 18, lr: 3.17e-03, grad_scale: 32.0 2023-03-27 02:11:44,943 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.458e+02 1.705e+02 2.128e+02 3.595e+02, threshold=3.409e+02, percent-clipped=0.0 2023-03-27 02:11:45,032 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0669, 1.7396, 1.1279, 1.9540, 2.4097, 1.7051, 1.8615, 2.0602], device='cuda:6'), covar=tensor([0.1226, 0.1626, 0.1725, 0.1074, 0.1679, 0.1751, 0.1233, 0.1648], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 02:11:45,042 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7184, 1.7538, 2.2650, 1.8999, 1.9791, 3.5992, 1.7369, 1.8572], device='cuda:6'), covar=tensor([0.0831, 0.1462, 0.0901, 0.0860, 0.1265, 0.0255, 0.1221, 0.1396], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:12:12,192 INFO [finetune.py:976] (6/7) Epoch 22, batch 0, loss[loss=0.2093, simple_loss=0.2835, pruned_loss=0.06758, over 4904.00 frames. ], tot_loss[loss=0.2093, simple_loss=0.2835, pruned_loss=0.06758, over 4904.00 frames. ], batch size: 36, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:12:12,192 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 02:12:27,778 INFO [finetune.py:1010] (6/7) Epoch 22, validation: loss=0.1597, simple_loss=0.228, pruned_loss=0.04574, over 2265189.00 frames. 
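At the Epoch 22 boundary above, training pauses to compute a validation loss over the whole dev set (loss=0.1597 over ~2.27e6 frames) before resuming. A sketch of that pass; compute_loss stands in for the same loss computation used in training and is a hypothetical name:

    import torch

    # Sketch: frame-weighted validation loss over the dev loader.
    def compute_validation_loss(model, dev_loader, device) -> float:
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                # compute_loss: hypothetical helper shared with training
                loss, num_frames = compute_loss(model, batch, device)
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames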
2023-03-27 02:12:27,778 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6385MB 2023-03-27 02:12:35,245 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0917, 2.0227, 2.1350, 1.6012, 2.0550, 2.2917, 2.2597, 1.7136], device='cuda:6'), covar=tensor([0.0509, 0.0542, 0.0568, 0.0733, 0.0955, 0.0520, 0.0451, 0.1098], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0137, 0.0139, 0.0121, 0.0125, 0.0139, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:13:15,375 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=120321.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:13:27,847 INFO [finetune.py:976] (6/7) Epoch 22, batch 50, loss[loss=0.1797, simple_loss=0.2553, pruned_loss=0.05203, over 4921.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2508, pruned_loss=0.0536, over 215773.66 frames. ], batch size: 33, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:13:48,725 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.043e+02 1.631e+02 1.972e+02 2.363e+02 4.295e+02, threshold=3.943e+02, percent-clipped=3.0 2023-03-27 02:13:55,431 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=120369.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:14:04,334 INFO [finetune.py:976] (6/7) Epoch 22, batch 100, loss[loss=0.2061, simple_loss=0.2666, pruned_loss=0.0728, over 4817.00 frames. ], tot_loss[loss=0.1787, simple_loss=0.248, pruned_loss=0.05475, over 379837.18 frames. ], batch size: 39, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:14:12,146 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 02:14:37,019 INFO [finetune.py:976] (6/7) Epoch 22, batch 150, loss[loss=0.133, simple_loss=0.2098, pruned_loss=0.02815, over 4740.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2434, pruned_loss=0.05391, over 507041.92 frames. ], batch size: 54, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:14:49,053 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-27 02:14:55,338 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.445e+02 1.737e+02 2.098e+02 4.550e+02, threshold=3.473e+02, percent-clipped=2.0 2023-03-27 02:15:03,516 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-27 02:15:10,228 INFO [finetune.py:976] (6/7) Epoch 22, batch 200, loss[loss=0.1995, simple_loss=0.2776, pruned_loss=0.06069, over 4814.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2398, pruned_loss=0.05118, over 608250.45 frames. 
], batch size: 41, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:15:14,959 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1932, 3.6118, 3.7814, 4.0542, 3.9340, 3.6915, 4.2955, 1.3934], device='cuda:6'), covar=tensor([0.0825, 0.0940, 0.0866, 0.0940, 0.1346, 0.1712, 0.0741, 0.5604], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0243, 0.0281, 0.0292, 0.0334, 0.0284, 0.0305, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:15:20,008 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6456, 1.5009, 1.4382, 1.5852, 1.1671, 3.4900, 1.3779, 1.8935], device='cuda:6'), covar=tensor([0.3301, 0.2583, 0.2219, 0.2419, 0.1778, 0.0196, 0.2668, 0.1150], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0113, 0.0096, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:15:42,765 INFO [finetune.py:976] (6/7) Epoch 22, batch 250, loss[loss=0.1539, simple_loss=0.2272, pruned_loss=0.04027, over 4826.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2431, pruned_loss=0.05201, over 685615.57 frames. ], batch size: 30, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:16:01,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.519e+02 1.843e+02 2.180e+02 3.548e+02, threshold=3.686e+02, percent-clipped=1.0 2023-03-27 02:16:06,899 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6558, 1.5472, 1.0929, 0.2322, 1.2759, 1.4695, 1.5468, 1.5175], device='cuda:6'), covar=tensor([0.0843, 0.0834, 0.1362, 0.2124, 0.1381, 0.2273, 0.2175, 0.0830], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0190, 0.0197, 0.0182, 0.0208, 0.0207, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:16:16,409 INFO [finetune.py:976] (6/7) Epoch 22, batch 300, loss[loss=0.1858, simple_loss=0.263, pruned_loss=0.05425, over 4924.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.249, pruned_loss=0.05335, over 746562.14 frames. ], batch size: 33, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:16:50,474 INFO [finetune.py:976] (6/7) Epoch 22, batch 350, loss[loss=0.231, simple_loss=0.2959, pruned_loss=0.08306, over 4809.00 frames. ], tot_loss[loss=0.1799, simple_loss=0.2514, pruned_loss=0.05424, over 794324.15 frames. ], batch size: 39, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:17:09,292 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.647e+02 1.881e+02 2.280e+02 4.594e+02, threshold=3.762e+02, percent-clipped=3.0 2023-03-27 02:17:19,795 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=120676.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:17:23,373 INFO [finetune.py:976] (6/7) Epoch 22, batch 400, loss[loss=0.1698, simple_loss=0.247, pruned_loss=0.04634, over 4896.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2503, pruned_loss=0.05303, over 830329.75 frames. ], batch size: 37, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:18:12,315 INFO [finetune.py:976] (6/7) Epoch 22, batch 450, loss[loss=0.1582, simple_loss=0.246, pruned_loss=0.03527, over 4811.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2504, pruned_loss=0.05363, over 859327.45 frames. 
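Batch sizes in this section swing between 17 and 65 utterances because the sampler packs cuts up to a duration budget (the run's configured max_duration of 200 s): long utterances mean few per batch, short ones many. A deliberately simplified packing sketch, not lhotse's actual DynamicBucketingSampler:

    # Sketch: pack utterances by duration so batch size varies
    # inversely with utterance length (budget per batch in seconds).
    def pack_by_duration(durations, max_duration: float = 200.0):
        batch, total = [], 0.0
        for d in sorted(durations):  # bucketing groups similar lengths
            if batch and total + d > max_duration:
                yield batch
                batch, total = [], 0.0
            batch.append(d)
            total += d
        if batch:
            yield batch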
], batch size: 40, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:18:20,746 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=120737.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:18:43,320 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.581e+02 1.866e+02 2.243e+02 3.725e+02, threshold=3.731e+02, percent-clipped=0.0 2023-03-27 02:19:07,337 INFO [finetune.py:976] (6/7) Epoch 22, batch 500, loss[loss=0.1338, simple_loss=0.2001, pruned_loss=0.03377, over 4853.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2473, pruned_loss=0.05324, over 881800.77 frames. ], batch size: 49, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:19:40,276 INFO [finetune.py:976] (6/7) Epoch 22, batch 550, loss[loss=0.1241, simple_loss=0.2072, pruned_loss=0.02051, over 4766.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2446, pruned_loss=0.05219, over 898777.66 frames. ], batch size: 28, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:19:42,239 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1763, 2.1009, 1.8546, 2.3848, 2.7798, 2.2375, 2.1603, 1.6924], device='cuda:6'), covar=tensor([0.2249, 0.2018, 0.1999, 0.1481, 0.1623, 0.1211, 0.2117, 0.1906], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0211, 0.0215, 0.0196, 0.0244, 0.0190, 0.0219, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:19:45,530 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0 2023-03-27 02:19:49,425 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1995, 1.9466, 2.3055, 2.1671, 1.9293, 1.9500, 2.2065, 2.1970], device='cuda:6'), covar=tensor([0.3656, 0.3710, 0.2758, 0.3769, 0.4579, 0.4100, 0.4455, 0.2597], device='cuda:6'), in_proj_covar=tensor([0.0254, 0.0241, 0.0262, 0.0282, 0.0281, 0.0257, 0.0289, 0.0244], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:19:58,120 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.544e+02 1.778e+02 2.172e+02 3.720e+02, threshold=3.555e+02, percent-clipped=0.0 2023-03-27 02:20:04,757 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0208, 1.8255, 1.6488, 1.6666, 1.7512, 1.7896, 1.7886, 2.4734], device='cuda:6'), covar=tensor([0.3890, 0.4232, 0.3188, 0.3742, 0.4049, 0.2316, 0.3645, 0.1774], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0233, 0.0278, 0.0255, 0.0224, 0.0254, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:20:13,119 INFO [finetune.py:976] (6/7) Epoch 22, batch 600, loss[loss=0.152, simple_loss=0.2312, pruned_loss=0.03645, over 4817.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2443, pruned_loss=0.05226, over 911617.10 frames. ], batch size: 39, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:20:46,527 INFO [finetune.py:976] (6/7) Epoch 22, batch 650, loss[loss=0.2324, simple_loss=0.292, pruned_loss=0.08645, over 4831.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.249, pruned_loss=0.05391, over 922548.12 frames. ], batch size: 49, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:21:03,254 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. 
limit=2.0 2023-03-27 02:21:04,772 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.580e+02 1.877e+02 2.237e+02 3.344e+02, threshold=3.754e+02, percent-clipped=0.0 2023-03-27 02:21:20,033 INFO [finetune.py:976] (6/7) Epoch 22, batch 700, loss[loss=0.1401, simple_loss=0.2146, pruned_loss=0.03284, over 4932.00 frames. ], tot_loss[loss=0.1801, simple_loss=0.2511, pruned_loss=0.0545, over 929557.10 frames. ], batch size: 38, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:21:32,897 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121003.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:21:53,221 INFO [finetune.py:976] (6/7) Epoch 22, batch 750, loss[loss=0.2108, simple_loss=0.2785, pruned_loss=0.07158, over 4810.00 frames. ], tot_loss[loss=0.1802, simple_loss=0.2515, pruned_loss=0.05447, over 935375.36 frames. ], batch size: 39, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:21:53,319 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121032.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:22:09,808 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.017e+02 1.617e+02 1.906e+02 2.410e+02 4.829e+02, threshold=3.812e+02, percent-clipped=5.0 2023-03-27 02:22:14,827 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121064.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:22:15,452 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7267, 1.8380, 1.5275, 1.6999, 2.2405, 2.2159, 2.0154, 1.8876], device='cuda:6'), covar=tensor([0.0477, 0.0315, 0.0589, 0.0287, 0.0250, 0.0555, 0.0270, 0.0326], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0107, 0.0143, 0.0111, 0.0099, 0.0111, 0.0100, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.6103e-05, 8.1905e-05, 1.1247e-04, 8.5177e-05, 7.6899e-05, 8.1930e-05, 7.4616e-05, 8.5577e-05], device='cuda:6') 2023-03-27 02:22:26,764 INFO [finetune.py:976] (6/7) Epoch 22, batch 800, loss[loss=0.1974, simple_loss=0.2629, pruned_loss=0.06592, over 4721.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.2497, pruned_loss=0.05345, over 938714.09 frames. ], batch size: 54, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:22:33,097 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.0722, 0.9871, 0.9914, 0.4426, 0.9227, 1.1367, 1.0689, 0.9927], device='cuda:6'), covar=tensor([0.0812, 0.0546, 0.0602, 0.0436, 0.0643, 0.0564, 0.0380, 0.0625], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0149, 0.0125, 0.0123, 0.0130, 0.0128, 0.0140, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.9036e-05, 1.0794e-04, 8.9631e-05, 8.6375e-05, 9.1455e-05, 9.1485e-05, 1.0071e-04, 1.0558e-04], device='cuda:6') 2023-03-27 02:22:47,240 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8480, 3.4128, 3.5386, 3.6870, 3.6192, 3.4301, 3.9388, 1.0947], device='cuda:6'), covar=tensor([0.0927, 0.0948, 0.0921, 0.1114, 0.1549, 0.1808, 0.0887, 0.6035], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0242, 0.0279, 0.0290, 0.0332, 0.0283, 0.0303, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:23:10,263 INFO [finetune.py:976] (6/7) Epoch 22, batch 850, loss[loss=0.1435, simple_loss=0.2077, pruned_loss=0.0397, over 4791.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2475, pruned_loss=0.05248, over 941773.64 frames. 
], batch size: 25, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:23:27,591 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5117, 1.5627, 2.3374, 1.8320, 1.7033, 3.8725, 1.4472, 1.6685], device='cuda:6'), covar=tensor([0.0908, 0.1712, 0.1013, 0.0912, 0.1542, 0.0209, 0.1479, 0.1671], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:23:28,702 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.269e+01 1.509e+02 1.830e+02 2.168e+02 4.982e+02, threshold=3.659e+02, percent-clipped=1.0 2023-03-27 02:23:37,197 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121164.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:23:58,240 INFO [finetune.py:976] (6/7) Epoch 22, batch 900, loss[loss=0.148, simple_loss=0.2244, pruned_loss=0.03578, over 4864.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2442, pruned_loss=0.05121, over 945500.01 frames. ], batch size: 34, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:24:09,543 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7152, 1.6534, 1.5183, 1.8799, 2.1663, 1.8846, 1.4311, 1.4287], device='cuda:6'), covar=tensor([0.2400, 0.2230, 0.2161, 0.1729, 0.1708, 0.1377, 0.2836, 0.2005], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0209, 0.0213, 0.0195, 0.0242, 0.0189, 0.0217, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:24:38,340 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121225.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:24:38,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8966, 1.4176, 2.0255, 1.9143, 1.7270, 1.6749, 1.9087, 1.9283], device='cuda:6'), covar=tensor([0.2777, 0.2935, 0.2263, 0.2865, 0.3535, 0.3176, 0.2986, 0.2279], device='cuda:6'), in_proj_covar=tensor([0.0255, 0.0243, 0.0263, 0.0284, 0.0282, 0.0258, 0.0291, 0.0245], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:24:42,942 INFO [finetune.py:976] (6/7) Epoch 22, batch 950, loss[loss=0.1973, simple_loss=0.2717, pruned_loss=0.0614, over 4928.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2431, pruned_loss=0.05138, over 947414.27 frames. ], batch size: 38, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:24:44,925 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3083, 1.2601, 1.2320, 0.7342, 1.2740, 1.4703, 1.4694, 1.2262], device='cuda:6'), covar=tensor([0.0771, 0.0498, 0.0499, 0.0469, 0.0486, 0.0443, 0.0308, 0.0537], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0149, 0.0125, 0.0122, 0.0130, 0.0129, 0.0141, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.9046e-05, 1.0788e-04, 8.9450e-05, 8.6305e-05, 9.1685e-05, 9.1739e-05, 1.0081e-04, 1.0533e-04], device='cuda:6') 2023-03-27 02:24:59,303 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.960e+01 1.496e+02 1.804e+02 2.225e+02 4.174e+02, threshold=3.608e+02, percent-clipped=1.0 2023-03-27 02:25:16,227 INFO [finetune.py:976] (6/7) Epoch 22, batch 1000, loss[loss=0.1783, simple_loss=0.2466, pruned_loss=0.05497, over 4896.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2459, pruned_loss=0.05317, over 949433.50 frames. 
], batch size: 32, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:25:28,382 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7364, 1.7691, 1.5996, 2.0194, 2.1963, 2.0127, 1.5677, 1.4225], device='cuda:6'), covar=tensor([0.2394, 0.2032, 0.1968, 0.1513, 0.1907, 0.1305, 0.2573, 0.2050], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0211, 0.0215, 0.0197, 0.0244, 0.0190, 0.0219, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:25:33,754 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121311.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 02:25:49,276 INFO [finetune.py:976] (6/7) Epoch 22, batch 1050, loss[loss=0.1394, simple_loss=0.2306, pruned_loss=0.02415, over 4764.00 frames. ], tot_loss[loss=0.1774, simple_loss=0.2482, pruned_loss=0.05328, over 951207.48 frames. ], batch size: 28, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:25:49,363 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121332.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:26:05,530 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.073e+02 1.613e+02 2.056e+02 2.633e+02 6.948e+02, threshold=4.113e+02, percent-clipped=5.0 2023-03-27 02:26:05,614 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121359.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:26:13,986 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121372.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 02:26:19,242 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=121380.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:26:20,876 INFO [finetune.py:976] (6/7) Epoch 22, batch 1100, loss[loss=0.1782, simple_loss=0.2445, pruned_loss=0.05594, over 4907.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2498, pruned_loss=0.0541, over 952324.47 frames. ], batch size: 43, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:26:26,989 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0 2023-03-27 02:26:36,777 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121407.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:26:53,106 INFO [finetune.py:976] (6/7) Epoch 22, batch 1150, loss[loss=0.1567, simple_loss=0.2248, pruned_loss=0.04429, over 4804.00 frames. ], tot_loss[loss=0.179, simple_loss=0.2501, pruned_loss=0.05395, over 953751.71 frames. 
], batch size: 25, lr: 3.16e-03, grad_scale: 32.0 2023-03-27 02:27:05,121 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2172, 2.2049, 2.3045, 1.5796, 2.1427, 2.3119, 2.2347, 1.7388], device='cuda:6'), covar=tensor([0.0519, 0.0521, 0.0590, 0.0860, 0.0704, 0.0642, 0.0573, 0.1094], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0137, 0.0141, 0.0121, 0.0127, 0.0140, 0.0141, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:27:10,453 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.055e+01 1.683e+02 1.964e+02 2.424e+02 3.625e+02, threshold=3.928e+02, percent-clipped=0.0 2023-03-27 02:27:15,971 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=121468.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:27:25,673 INFO [finetune.py:976] (6/7) Epoch 22, batch 1200, loss[loss=0.1641, simple_loss=0.2322, pruned_loss=0.04794, over 4811.00 frames. ], tot_loss[loss=0.1771, simple_loss=0.2479, pruned_loss=0.05313, over 955373.68 frames. ], batch size: 39, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:27:49,700 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121520.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:27:57,407 INFO [finetune.py:976] (6/7) Epoch 22, batch 1250, loss[loss=0.1616, simple_loss=0.239, pruned_loss=0.0421, over 4929.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2451, pruned_loss=0.05217, over 954694.18 frames. ], batch size: 38, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:28:26,677 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.015e+02 1.487e+02 1.790e+02 2.203e+02 4.512e+02, threshold=3.581e+02, percent-clipped=1.0 2023-03-27 02:28:40,601 INFO [finetune.py:976] (6/7) Epoch 22, batch 1300, loss[loss=0.1717, simple_loss=0.2385, pruned_loss=0.05243, over 4929.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2426, pruned_loss=0.05111, over 956170.18 frames. ], batch size: 33, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:28:55,854 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 02:29:38,999 INFO [finetune.py:976] (6/7) Epoch 22, batch 1350, loss[loss=0.1878, simple_loss=0.2477, pruned_loss=0.06396, over 4821.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2441, pruned_loss=0.05185, over 956267.79 frames. ], batch size: 33, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:29:51,873 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6383, 3.3865, 3.2501, 1.4525, 3.5644, 2.7545, 0.8046, 2.3463], device='cuda:6'), covar=tensor([0.2438, 0.2410, 0.1622, 0.3447, 0.1075, 0.0927, 0.4413, 0.1513], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0178, 0.0159, 0.0130, 0.0160, 0.0123, 0.0149, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 02:30:02,099 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.135e+02 1.546e+02 1.913e+02 2.289e+02 6.231e+02, threshold=3.826e+02, percent-clipped=1.0 2023-03-27 02:30:02,199 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121659.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:30:05,094 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. 
limit=2.0 2023-03-27 02:30:07,035 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121667.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 02:30:16,104 INFO [finetune.py:976] (6/7) Epoch 22, batch 1400, loss[loss=0.1632, simple_loss=0.231, pruned_loss=0.04768, over 4919.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2466, pruned_loss=0.05231, over 958383.02 frames. ], batch size: 36, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:30:24,841 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4236, 1.3334, 2.0084, 1.6611, 1.5922, 3.3319, 1.3311, 1.3871], device='cuda:6'), covar=tensor([0.1063, 0.2089, 0.1280, 0.1116, 0.1708, 0.0301, 0.1845, 0.2163], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:30:34,344 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=121707.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:30:36,210 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5857, 1.4304, 1.4559, 1.5056, 0.9906, 2.9791, 1.1429, 1.5426], device='cuda:6'), covar=tensor([0.3411, 0.2584, 0.2235, 0.2474, 0.1945, 0.0252, 0.2679, 0.1323], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:30:49,371 INFO [finetune.py:976] (6/7) Epoch 22, batch 1450, loss[loss=0.23, simple_loss=0.2914, pruned_loss=0.08427, over 4877.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2492, pruned_loss=0.05339, over 958890.93 frames. ], batch size: 34, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:30:58,325 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.5681, 3.2048, 2.8498, 1.6054, 2.9981, 2.4348, 2.3027, 2.7712], device='cuda:6'), covar=tensor([0.0664, 0.0757, 0.1606, 0.2034, 0.1390, 0.2052, 0.1975, 0.1010], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0193, 0.0200, 0.0183, 0.0210, 0.0209, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:31:08,590 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.585e+02 1.838e+02 2.176e+02 4.319e+02, threshold=3.676e+02, percent-clipped=1.0 2023-03-27 02:31:11,101 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=121763.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:31:17,064 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3719, 3.8792, 4.1028, 4.0628, 3.9300, 3.8196, 4.4990, 1.5203], device='cuda:6'), covar=tensor([0.1135, 0.1506, 0.1283, 0.1658, 0.1864, 0.2308, 0.1011, 0.7528], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0244, 0.0280, 0.0290, 0.0332, 0.0284, 0.0303, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:31:22,464 INFO [finetune.py:976] (6/7) Epoch 22, batch 1500, loss[loss=0.1962, simple_loss=0.2539, pruned_loss=0.06927, over 4888.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2505, pruned_loss=0.05396, over 959105.41 frames. 
], batch size: 35, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:31:33,742 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7109, 1.7310, 1.4942, 1.8323, 2.3755, 1.9190, 1.5949, 1.4222], device='cuda:6'), covar=tensor([0.2187, 0.1906, 0.1934, 0.1600, 0.1557, 0.1185, 0.2362, 0.1923], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0210, 0.0215, 0.0196, 0.0243, 0.0189, 0.0218, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 02:31:49,136 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121820.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:31:56,351 INFO [finetune.py:976] (6/7) Epoch 22, batch 1550, loss[loss=0.201, simple_loss=0.267, pruned_loss=0.0675, over 4905.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2507, pruned_loss=0.05396, over 958807.74 frames. ], batch size: 36, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:32:00,186 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.06 vs. limit=5.0 2023-03-27 02:32:15,626 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.581e+02 1.856e+02 2.152e+02 3.350e+02, threshold=3.712e+02, percent-clipped=0.0 2023-03-27 02:32:21,171 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=121868.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:32:29,538 INFO [finetune.py:976] (6/7) Epoch 22, batch 1600, loss[loss=0.1884, simple_loss=0.2536, pruned_loss=0.06159, over 4875.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2489, pruned_loss=0.05338, over 958752.33 frames. ], batch size: 31, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:32:58,518 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3620, 1.4352, 1.7020, 1.5074, 1.5948, 2.9728, 1.4079, 1.5613], device='cuda:6'), covar=tensor([0.0957, 0.1774, 0.0994, 0.0971, 0.1553, 0.0297, 0.1466, 0.1697], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 02:33:02,675 INFO [finetune.py:976] (6/7) Epoch 22, batch 1650, loss[loss=0.138, simple_loss=0.2139, pruned_loss=0.03111, over 4823.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2464, pruned_loss=0.05254, over 959121.55 frames. ], batch size: 41, lr: 3.15e-03, grad_scale: 64.0 2023-03-27 02:33:13,730 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=121950.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 02:33:22,466 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.553e+02 1.837e+02 2.107e+02 3.976e+02, threshold=3.675e+02, percent-clipped=1.0 2023-03-27 02:33:33,822 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=121967.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 02:33:46,461 INFO [finetune.py:976] (6/7) Epoch 22, batch 1700, loss[loss=0.1834, simple_loss=0.2449, pruned_loss=0.06096, over 4832.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2432, pruned_loss=0.05143, over 957951.27 frames. 
2023-03-27 02:33:46,601 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2079, 2.0380, 1.6174, 1.7422, 2.1155, 1.8395, 2.2523, 2.1309], device='cuda:6'), covar=tensor([0.1309, 0.1804, 0.3059, 0.2664, 0.2491, 0.1656, 0.2868, 0.1682], device='cuda:6'), in_proj_covar=tensor([0.0187, 0.0188, 0.0236, 0.0255, 0.0248, 0.0204, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:34:13,884 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122011.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:34:16,846 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122015.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 02:34:36,869 INFO [finetune.py:976] (6/7) Epoch 22, batch 1750, loss[loss=0.2395, simple_loss=0.324, pruned_loss=0.07749, over 4860.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2448, pruned_loss=0.05234, over 956466.85 frames. ], batch size: 44, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:34:54,091 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0608, 1.7366, 2.4500, 3.9385, 2.7272, 2.7633, 1.1015, 3.3415], device='cuda:6'), covar=tensor([0.1640, 0.1484, 0.1358, 0.0486, 0.0734, 0.1535, 0.1801, 0.0382], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0165, 0.0100, 0.0137, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 02:35:06,509 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.629e+02 1.889e+02 2.189e+02 5.095e+02, threshold=3.778e+02, percent-clipped=2.0
2023-03-27 02:35:08,218 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4705, 1.2198, 1.7541, 1.7183, 1.3743, 3.0552, 1.0718, 1.2929], device='cuda:6'), covar=tensor([0.1075, 0.2375, 0.1556, 0.1111, 0.1942, 0.0319, 0.2223, 0.2441], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0076, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 02:35:10,496 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122063.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:35:10,594 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.72 vs. limit=5.0
2023-03-27 02:35:22,847 INFO [finetune.py:976] (6/7) Epoch 22, batch 1800, loss[loss=0.12, simple_loss=0.1882, pruned_loss=0.02593, over 4699.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2467, pruned_loss=0.05256, over 956169.28 frames. ], batch size: 23, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:35:41,158 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122111.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:35:46,358 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122117.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:35:54,952 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-27 02:35:56,275 INFO [finetune.py:976] (6/7) Epoch 22, batch 1850, loss[loss=0.1928, simple_loss=0.2675, pruned_loss=0.059, over 4844.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2487, pruned_loss=0.05318, over 955180.83 frames. ], batch size: 47, lr: 3.15e-03, grad_scale: 64.0
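Note on the optim.py:369 records: the five numbers after "grad-norm quartiles" are the min/25%/median/75%/max of recently observed gradient norms, and in every record above the logged threshold equals Clipping_scale times the median (e.g. 2.0 * 1.889e+02 = 3.778e+02); percent-clipped is the share of recent steps whose norm exceeded that threshold. A sketch of deriving these statistics from a window of recent norms; the window itself is an assumption, not something the log states:

    import numpy as np

    # Sketch: the statistics logged above, computed from recent grad norms.
    # threshold = clipping_scale * median, matching every record in this log.
    def grad_norm_stats(recent_norms, clipping_scale=2.0):
        quartiles = np.percentile(recent_norms, [0, 25, 50, 75, 100])
        threshold = clipping_scale * quartiles[2]
        percent_clipped = 100.0 * np.mean(np.asarray(recent_norms) > threshold)
        return quartiles, threshold, percent_clipped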
2023-03-27 02:36:08,742 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-27 02:36:11,041 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5150, 1.3618, 1.9982, 1.7249, 1.6238, 3.2589, 1.3212, 1.5346], device='cuda:6'), covar=tensor([0.0981, 0.1888, 0.1223, 0.0968, 0.1507, 0.0276, 0.1575, 0.1753], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 02:36:12,755 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.044e+01 1.577e+02 1.909e+02 2.256e+02 5.766e+02, threshold=3.818e+02, percent-clipped=1.0
2023-03-27 02:36:16,385 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6349, 1.1063, 0.8196, 1.5324, 2.0758, 1.1861, 1.3363, 1.5729], device='cuda:6'), covar=tensor([0.1409, 0.2090, 0.1887, 0.1164, 0.1789, 0.1869, 0.1457, 0.1826], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0093, 0.0109, 0.0091, 0.0118, 0.0092, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6')
2023-03-27 02:36:27,293 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122178.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 02:36:29,612 INFO [finetune.py:976] (6/7) Epoch 22, batch 1900, loss[loss=0.1618, simple_loss=0.2374, pruned_loss=0.04311, over 4906.00 frames. ], tot_loss[loss=0.1783, simple_loss=0.25, pruned_loss=0.05331, over 954580.33 frames. ], batch size: 38, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:36:37,648 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122195.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:36:43,862 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2880, 1.2586, 1.5283, 1.0891, 1.2936, 1.4857, 1.2332, 1.6447], device='cuda:6'), covar=tensor([0.1155, 0.2164, 0.1227, 0.1387, 0.0949, 0.1184, 0.3066, 0.0734], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0207, 0.0193, 0.0191, 0.0176, 0.0214, 0.0219, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:36:51,580 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0
2023-03-27 02:37:03,560 INFO [finetune.py:976] (6/7) Epoch 22, batch 1950, loss[loss=0.1703, simple_loss=0.2272, pruned_loss=0.05663, over 4806.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2487, pruned_loss=0.05245, over 956073.36 frames. ], batch size: 25, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:37:12,543 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0
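Note on the zipformer.py:1188 records: each step logs which encoder layers, if any, will be stochastically skipped on that step (num_to_drop is almost always 0, occasionally 1 with layers_to_drop={0}, {1} or {2}). A rough sketch of such a random selection; the per-layer drop probability is an assumption, and the warmup_begin/warmup_end counters logged alongside suggest the probability is scheduled over batch count rather than fixed:

    import random

    # Sketch: pick encoder layers to skip on this training step.
    # The probability is an illustrative assumption only.
    def choose_layers_to_drop(num_layers, p_drop=0.05):
        return {i for i in range(num_layers) if random.random() < p_drop}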
2023-03-27 02:37:18,275 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122256.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:37:19,977 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.844e+01 1.567e+02 1.786e+02 2.225e+02 4.203e+02, threshold=3.573e+02, percent-clipped=2.0
2023-03-27 02:37:21,872 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5822, 1.1918, 0.9510, 1.4946, 2.0393, 1.1437, 1.3344, 1.4797], device='cuda:6'), covar=tensor([0.1870, 0.2760, 0.2280, 0.1621, 0.2193, 0.2595, 0.2062, 0.2724], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0109, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-27 02:37:36,887 INFO [finetune.py:976] (6/7) Epoch 22, batch 2000, loss[loss=0.1728, simple_loss=0.2509, pruned_loss=0.04739, over 4792.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2454, pruned_loss=0.05158, over 955962.08 frames. ], batch size: 29, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:37:51,526 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122306.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:37:54,673 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.56 vs. limit=2.0
2023-03-27 02:37:59,950 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122319.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:38:10,019 INFO [finetune.py:976] (6/7) Epoch 22, batch 2050, loss[loss=0.1193, simple_loss=0.1945, pruned_loss=0.02209, over 4777.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2418, pruned_loss=0.05027, over 956595.04 frames. ], batch size: 23, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:38:15,504 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9606, 1.8021, 2.0322, 1.1653, 1.9701, 1.9822, 2.0391, 1.5992], device='cuda:6'), covar=tensor([0.0488, 0.0598, 0.0506, 0.0759, 0.0591, 0.0570, 0.0476, 0.1044], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0136, 0.0141, 0.0121, 0.0126, 0.0139, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:38:25,170 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3332, 2.1081, 2.2252, 1.5622, 2.1364, 2.2903, 2.3959, 1.7492], device='cuda:6'), covar=tensor([0.0537, 0.0644, 0.0652, 0.0798, 0.0630, 0.0632, 0.0560, 0.1134], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0136, 0.0141, 0.0121, 0.0126, 0.0139, 0.0140, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:38:26,816 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.990e+01 1.410e+02 1.787e+02 2.088e+02 3.673e+02, threshold=3.574e+02, percent-clipped=1.0
2023-03-27 02:38:33,643 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3911, 1.3171, 1.6150, 1.5486, 1.4417, 2.8718, 1.2296, 1.4512], device='cuda:6'), covar=tensor([0.0956, 0.1788, 0.1109, 0.0921, 0.1663, 0.0316, 0.1529, 0.1734], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 02:38:35,316 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4202, 1.3198, 1.6277, 1.5339, 1.5235, 2.8884, 1.3393, 1.4648], device='cuda:6'), covar=tensor([0.0893, 0.1789, 0.1112, 0.0928, 0.1601, 0.0322, 0.1490, 0.1790], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0092, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 02:38:43,002 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122380.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:38:44,603 INFO [finetune.py:976] (6/7) Epoch 22, batch 2100, loss[loss=0.1522, simple_loss=0.2377, pruned_loss=0.03341, over 4763.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2431, pruned_loss=0.05096, over 957635.04 frames. ], batch size: 26, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:39:28,194 INFO [finetune.py:976] (6/7) Epoch 22, batch 2150, loss[loss=0.2119, simple_loss=0.2804, pruned_loss=0.07171, over 4894.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2453, pruned_loss=0.05181, over 955379.80 frames. ], batch size: 35, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:40:03,770 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.048e+02 1.591e+02 1.910e+02 2.352e+02 5.051e+02, threshold=3.819e+02, percent-clipped=3.0
2023-03-27 02:40:13,691 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122468.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:40:16,633 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122473.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 02:40:26,429 INFO [finetune.py:976] (6/7) Epoch 22, batch 2200, loss[loss=0.1645, simple_loss=0.2449, pruned_loss=0.04205, over 4810.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2475, pruned_loss=0.05271, over 956273.80 frames. ], batch size: 45, lr: 3.15e-03, grad_scale: 64.0
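Note on the attn_weights_entropy dumps: these are per-head diagnostics; the entropy of a head's attention distribution measures how spread out its weights are (near 0 when a head attends to a single position, larger when attention is diffuse). A minimal sketch of the quantity, assuming standard softmax attention weights; the tensor layout is an assumption:

    import torch

    # Sketch: mean entropy of attention weights, one value per head.
    # attn has shape (num_heads, num_queries, num_keys); rows sum to 1.
    def attention_entropy(attn, eps=1e-20):
        ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (heads, queries)
        return ent.mean(dim=-1)                          # one value per head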
2023-03-27 02:40:43,548 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1131, 1.9160, 1.7717, 1.9303, 1.8165, 1.8453, 1.8704, 2.6509], device='cuda:6'), covar=tensor([0.3188, 0.3905, 0.2923, 0.3609, 0.3974, 0.2207, 0.3712, 0.1468], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0233, 0.0276, 0.0255, 0.0225, 0.0253, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:40:57,262 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122529.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:40:58,944 INFO [finetune.py:976] (6/7) Epoch 22, batch 2250, loss[loss=0.1817, simple_loss=0.266, pruned_loss=0.04867, over 4827.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2483, pruned_loss=0.05265, over 956030.42 frames. ], batch size: 33, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:41:12,977 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122551.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:41:17,776 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.977e+01 1.468e+02 1.830e+02 2.095e+02 3.153e+02, threshold=3.659e+02, percent-clipped=0.0
2023-03-27 02:41:29,398 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4980, 1.5560, 1.7495, 0.9463, 1.7703, 1.8598, 1.9751, 1.5286], device='cuda:6'), covar=tensor([0.0764, 0.0637, 0.0511, 0.0476, 0.0453, 0.0647, 0.0323, 0.0657], device='cuda:6'), in_proj_covar=tensor([0.0124, 0.0151, 0.0128, 0.0125, 0.0133, 0.0132, 0.0144, 0.0150], device='cuda:6'), out_proj_covar=tensor([9.0801e-05, 1.0930e-04, 9.1760e-05, 8.7885e-05, 9.3782e-05, 9.4125e-05, 1.0289e-04, 1.0742e-04], device='cuda:6')
2023-03-27 02:41:29,492 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.12 vs. limit=5.0
2023-03-27 02:41:31,702 INFO [finetune.py:976] (6/7) Epoch 22, batch 2300, loss[loss=0.1513, simple_loss=0.2271, pruned_loss=0.03772, over 4783.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2481, pruned_loss=0.05223, over 954709.30 frames. ], batch size: 29, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:41:49,477 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122606.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:41:54,531 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0
2023-03-27 02:42:05,214 INFO [finetune.py:976] (6/7) Epoch 22, batch 2350, loss[loss=0.1446, simple_loss=0.2225, pruned_loss=0.03332, over 4779.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2465, pruned_loss=0.05187, over 955453.05 frames. ], batch size: 29, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:42:21,478 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122654.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:42:24,441 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.047e+02 1.417e+02 1.625e+02 2.018e+02 3.172e+02, threshold=3.250e+02, percent-clipped=0.0
2023-03-27 02:42:34,178 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122675.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:42:38,345 INFO [finetune.py:976] (6/7) Epoch 22, batch 2400, loss[loss=0.1615, simple_loss=0.2305, pruned_loss=0.04625, over 4777.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2442, pruned_loss=0.05134, over 955254.35 frames. ], batch size: 26, lr: 3.15e-03, grad_scale: 64.0
2023-03-27 02:43:11,474 INFO [finetune.py:976] (6/7) Epoch 22, batch 2450, loss[loss=0.1672, simple_loss=0.2313, pruned_loss=0.05152, over 4836.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2406, pruned_loss=0.04992, over 954168.90 frames. ], batch size: 33, lr: 3.14e-03, grad_scale: 64.0
2023-03-27 02:43:31,104 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.588e+02 1.841e+02 2.130e+02 2.968e+02, threshold=3.682e+02, percent-clipped=0.0
2023-03-27 02:43:39,662 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122773.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 02:43:43,899 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1568, 3.6458, 3.8322, 3.8099, 3.6963, 3.5971, 4.2797, 1.5129], device='cuda:6'), covar=tensor([0.1379, 0.1603, 0.1570, 0.2472, 0.2282, 0.2624, 0.1269, 0.7658], device='cuda:6'), in_proj_covar=tensor([0.0352, 0.0247, 0.0282, 0.0294, 0.0337, 0.0287, 0.0308, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:43:45,024 INFO [finetune.py:976] (6/7) Epoch 22, batch 2500, loss[loss=0.2343, simple_loss=0.3003, pruned_loss=0.0841, over 4829.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2409, pruned_loss=0.05052, over 953259.65 frames. ], batch size: 38, lr: 3.14e-03, grad_scale: 64.0
2023-03-27 02:44:21,546 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122821.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:44:23,365 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=122824.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:44:28,114 INFO [finetune.py:976] (6/7) Epoch 22, batch 2550, loss[loss=0.1383, simple_loss=0.2133, pruned_loss=0.03168, over 4801.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2443, pruned_loss=0.05173, over 952110.01 frames. ], batch size: 25, lr: 3.14e-03, grad_scale: 64.0
2023-03-27 02:44:42,563 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122851.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:44:53,793 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.164e+02 1.566e+02 1.889e+02 2.331e+02 3.878e+02, threshold=3.777e+02, percent-clipped=1.0
2023-03-27 02:45:11,729 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=122872.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:45:12,348 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5131, 1.0955, 0.8150, 1.3471, 1.9323, 0.8066, 1.3042, 1.3911], device='cuda:6'), covar=tensor([0.1554, 0.2225, 0.1861, 0.1268, 0.2004, 0.1954, 0.1559, 0.2016], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0091, 0.0119, 0.0094, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-27 02:45:22,082 INFO [finetune.py:976] (6/7) Epoch 22, batch 2600, loss[loss=0.2506, simple_loss=0.3108, pruned_loss=0.09515, over 4897.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2467, pruned_loss=0.05258, over 951436.35 frames. ], batch size: 35, lr: 3.14e-03, grad_scale: 64.0
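Note on the lr field: by batch 2450 the logged learning rate has ticked down from 3.15e-03 to 3.14e-03, a very slow decay consistent with a smooth schedule that depends on both batch and epoch counters. As one illustrative form only, in the style of icefall's Eden scheduler; the exponents and the default constants below are assumptions, not values taken from this run:

    # Sketch of a smooth batch- and epoch-dependent LR schedule.
    # All constants here are illustrative assumptions.
    def eden_like_lr(base_lr, batch, epoch, lr_batches=5000.0, lr_epochs=6.0):
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor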
2023-03-27 02:45:40,404 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=122899.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:45:57,328 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1929, 2.2109, 1.6975, 2.2616, 2.0237, 1.8327, 2.4477, 2.2382], device='cuda:6'), covar=tensor([0.1455, 0.2052, 0.3052, 0.2717, 0.2781, 0.1765, 0.4051, 0.1742], device='cuda:6'), in_proj_covar=tensor([0.0187, 0.0188, 0.0235, 0.0254, 0.0248, 0.0204, 0.0213, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:46:02,682 INFO [finetune.py:976] (6/7) Epoch 22, batch 2650, loss[loss=0.2297, simple_loss=0.289, pruned_loss=0.08516, over 4888.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2476, pruned_loss=0.05256, over 952191.11 frames. ], batch size: 32, lr: 3.14e-03, grad_scale: 64.0
2023-03-27 02:46:03,411 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=122933.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:46:08,056 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.75 vs. limit=2.0
2023-03-27 02:46:19,588 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.898e+01 1.493e+02 1.785e+02 2.135e+02 3.458e+02, threshold=3.570e+02, percent-clipped=0.0
2023-03-27 02:46:31,723 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=122975.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:46:35,902 INFO [finetune.py:976] (6/7) Epoch 22, batch 2700, loss[loss=0.1324, simple_loss=0.2153, pruned_loss=0.02473, over 4883.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.247, pruned_loss=0.05172, over 952674.13 frames. ], batch size: 43, lr: 3.14e-03, grad_scale: 64.0
2023-03-27 02:46:54,481 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-27 02:47:04,317 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=123023.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:47:09,737 INFO [finetune.py:976] (6/7) Epoch 22, batch 2750, loss[loss=0.2074, simple_loss=0.2743, pruned_loss=0.07025, over 4849.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2449, pruned_loss=0.05155, over 954399.68 frames. ], batch size: 47, lr: 3.14e-03, grad_scale: 64.0
2023-03-27 02:47:26,634 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.465e+02 1.704e+02 2.045e+02 3.535e+02, threshold=3.409e+02, percent-clipped=0.0
2023-03-27 02:47:36,191 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.81 vs. limit=5.0
2023-03-27 02:47:42,976 INFO [finetune.py:976] (6/7) Epoch 22, batch 2800, loss[loss=0.1569, simple_loss=0.2307, pruned_loss=0.04155, over 4918.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2413, pruned_loss=0.05007, over 954013.07 frames. ], batch size: 36, lr: 3.14e-03, grad_scale: 64.0
2023-03-27 02:48:11,381 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=123124.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:48:16,586 INFO [finetune.py:976] (6/7) Epoch 22, batch 2850, loss[loss=0.1847, simple_loss=0.2513, pruned_loss=0.05908, over 4760.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2409, pruned_loss=0.0505, over 954991.56 frames. ], batch size: 26, lr: 3.14e-03, grad_scale: 32.0
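Note on grad_scale: between batches 2800 and 2850 the logged grad_scale halves from 64.0 to 32.0 (and later in this log it returns to 64.0). That pattern matches PyTorch's dynamic loss scaling for fp16 training, where the scaler backs off after a step with inf/nan gradients and grows again after a run of clean steps. The standard usage looks like this; it is generic PyTorch, not the actual finetune.py code:

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=64.0)

    def training_step(model, optimizer, batch, loss_fn):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = loss_fn(model, batch)
        scaler.scale(loss).backward()
        scaler.step(optimizer)  # skipped if gradients overflowed
        scaler.update()         # halves the scale after an overflow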
2023-03-27 02:48:33,488 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.606e+02 1.919e+02 2.375e+02 6.875e+02, threshold=3.839e+02, percent-clipped=7.0
2023-03-27 02:48:42,230 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=123172.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:48:49,681 INFO [finetune.py:976] (6/7) Epoch 22, batch 2900, loss[loss=0.2236, simple_loss=0.2931, pruned_loss=0.07708, over 4834.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2439, pruned_loss=0.05175, over 954732.75 frames. ], batch size: 51, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:49:22,651 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=123228.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:49:25,029 INFO [finetune.py:976] (6/7) Epoch 22, batch 2950, loss[loss=0.1858, simple_loss=0.2516, pruned_loss=0.05998, over 4793.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.247, pruned_loss=0.05262, over 955212.40 frames. ], batch size: 25, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:49:42,387 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.515e+01 1.556e+02 1.822e+02 2.269e+02 3.192e+02, threshold=3.643e+02, percent-clipped=0.0
2023-03-27 02:49:59,810 INFO [finetune.py:976] (6/7) Epoch 22, batch 3000, loss[loss=0.2094, simple_loss=0.2764, pruned_loss=0.07123, over 4852.00 frames. ], tot_loss[loss=0.1785, simple_loss=0.2495, pruned_loss=0.05375, over 953209.94 frames. ], batch size: 44, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:49:59,811 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-27 02:50:02,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7920, 3.2979, 3.2476, 1.5209, 3.4149, 2.7733, 1.0500, 2.4883], device='cuda:6'), covar=tensor([0.1690, 0.1787, 0.1482, 0.3090, 0.1076, 0.0900, 0.3232, 0.1428], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0176, 0.0158, 0.0129, 0.0160, 0.0122, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-27 02:50:15,177 INFO [finetune.py:1010] (6/7) Epoch 22, validation: loss=0.1575, simple_loss=0.2256, pruned_loss=0.04471, over 2265189.00 frames.
2023-03-27 02:50:15,178 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6385MB
2023-03-27 02:50:23,080 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
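Note on the validation records above: at batch 3000 the trainer pauses, computes a loss over the whole dev set (reported "over 2265189.00 frames."), and logs peak memory. A minimal sketch of such a frame-weighted validation pass; compute_loss and the loader are placeholders, not the actual finetune.py API:

    import torch

    # Sketch: frame-weighted validation loss. Placeholder names throughout.
    @torch.no_grad()
    def validate(model, dev_loader, compute_loss):
        model.eval()
        loss_sum, frames = 0.0, 0.0
        for batch in dev_loader:
            loss, num_frames = compute_loss(model, batch)
            loss_sum += float(loss) * num_frames
            frames += num_frames
        model.train()
        return loss_sum / frames  # e.g. 0.1575 in the record above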
2023-03-27 02:50:47,969 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=123314.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:50:56,093 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0876, 2.1606, 1.7718, 2.2025, 2.6209, 2.1033, 2.0035, 1.6012], device='cuda:6'), covar=tensor([0.2226, 0.1900, 0.1850, 0.1607, 0.1804, 0.1230, 0.2156, 0.1946], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0211, 0.0213, 0.0196, 0.0243, 0.0190, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:51:05,012 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8056, 1.9944, 1.6706, 2.1194, 2.3786, 2.0099, 1.7047, 1.4950], device='cuda:6'), covar=tensor([0.2413, 0.1951, 0.2103, 0.1626, 0.1786, 0.1286, 0.2538, 0.2169], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0211, 0.0213, 0.0196, 0.0243, 0.0190, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:51:07,173 INFO [finetune.py:976] (6/7) Epoch 22, batch 3050, loss[loss=0.1557, simple_loss=0.2384, pruned_loss=0.03646, over 4735.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2507, pruned_loss=0.05401, over 954482.76 frames. ], batch size: 54, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:51:27,199 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.922e+01 1.647e+02 1.960e+02 2.377e+02 4.726e+02, threshold=3.920e+02, percent-clipped=6.0
2023-03-27 02:51:36,580 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=123375.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:51:41,107 INFO [finetune.py:976] (6/7) Epoch 22, batch 3100, loss[loss=0.1518, simple_loss=0.2292, pruned_loss=0.0372, over 4751.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2492, pruned_loss=0.05347, over 954545.14 frames. ], batch size: 27, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:51:59,613 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.24 vs. limit=5.0
2023-03-27 02:52:12,234 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=123428.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:52:14,577 INFO [finetune.py:976] (6/7) Epoch 22, batch 3150, loss[loss=0.148, simple_loss=0.2232, pruned_loss=0.03635, over 4796.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2459, pruned_loss=0.05246, over 954634.68 frames. ], batch size: 29, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:52:34,407 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.134e+02 1.485e+02 1.808e+02 2.093e+02 3.688e+02, threshold=3.617e+02, percent-clipped=0.0
2023-03-27 02:52:43,967 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
2023-03-27 02:52:47,863 INFO [finetune.py:976] (6/7) Epoch 22, batch 3200, loss[loss=0.1914, simple_loss=0.259, pruned_loss=0.06187, over 4822.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2426, pruned_loss=0.05135, over 953097.01 frames. ], batch size: 40, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:52:52,848 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=123489.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:52:55,897 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8918, 3.4408, 3.6324, 3.7855, 3.6583, 3.4382, 3.9866, 1.2882], device='cuda:6'), covar=tensor([0.1001, 0.0858, 0.0820, 0.1115, 0.1506, 0.1680, 0.0856, 0.5622], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0281, 0.0291, 0.0336, 0.0286, 0.0306, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:53:17,012 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0
2023-03-27 02:53:18,961 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=123528.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:53:21,318 INFO [finetune.py:976] (6/7) Epoch 22, batch 3250, loss[loss=0.1774, simple_loss=0.2559, pruned_loss=0.04941, over 4902.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2436, pruned_loss=0.05172, over 953692.77 frames. ], batch size: 35, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:53:41,211 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.285e+01 1.537e+02 1.740e+02 2.121e+02 3.629e+02, threshold=3.481e+02, percent-clipped=1.0
2023-03-27 02:53:51,149 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=123576.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:53:54,733 INFO [finetune.py:976] (6/7) Epoch 22, batch 3300, loss[loss=0.1862, simple_loss=0.2532, pruned_loss=0.05955, over 4905.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2478, pruned_loss=0.05326, over 952960.57 frames. ], batch size: 37, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:53:56,098 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9916, 1.9916, 1.6338, 1.9269, 1.8458, 1.7831, 1.8753, 2.5554], device='cuda:6'), covar=tensor([0.4001, 0.4251, 0.3407, 0.3869, 0.3876, 0.2639, 0.3642, 0.1752], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0233, 0.0276, 0.0254, 0.0225, 0.0253, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:54:16,135 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0090, 1.9766, 1.6134, 1.9007, 1.7885, 1.7674, 1.8533, 2.5385], device='cuda:6'), covar=tensor([0.3895, 0.3952, 0.3463, 0.3772, 0.4196, 0.2473, 0.3857, 0.1719], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0233, 0.0276, 0.0254, 0.0225, 0.0253, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:54:28,302 INFO [finetune.py:976] (6/7) Epoch 22, batch 3350, loss[loss=0.1243, simple_loss=0.1996, pruned_loss=0.02447, over 4708.00 frames. ], tot_loss[loss=0.1766, simple_loss=0.2485, pruned_loss=0.05235, over 954242.71 frames. ], batch size: 23, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:54:34,276 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0
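Note on the scaling.py:679 records: each compares a measured "whiteness" statistic for a group of channels against a limit (2.0 at the 8-group sites, 5.0 at the 1-group, 384-channel site); values under the limit need no correction. One plausible definition of such a metric, written here as an assumption rather than a reading of scaling.py, is the ratio of the mean squared eigenvalue of the per-group feature covariance to the squared mean eigenvalue, which equals 1.0 for perfectly white features and grows as the spectrum becomes lopsided:

    import torch

    # Sketch of an assumed whiteness metric per channel group.
    # x: (num_frames, num_channels); channels split into num_groups groups.
    def whitening_metric(x, num_groups):
        n, c = x.shape
        xg = x.reshape(n, num_groups, c // num_groups).transpose(0, 1)
        cov = xg.transpose(1, 2) @ xg / n        # (groups, d, d)
        eig = torch.linalg.eigvalsh(cov)         # eigenvalues per group
        metric = (eig ** 2).mean(dim=-1) / eig.mean(dim=-1) ** 2
        return metric.mean()  # compared against limit=2.0 or limit=5.0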
2023-03-27 02:54:47,671 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.106e+02 1.542e+02 1.809e+02 2.055e+02 5.285e+02, threshold=3.617e+02, percent-clipped=2.0
2023-03-27 02:54:54,269 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=123670.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:54:57,967 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6861, 1.3609, 2.0455, 1.9248, 1.6417, 3.4378, 1.5110, 1.5805], device='cuda:6'), covar=tensor([0.0829, 0.1786, 0.1079, 0.0891, 0.1521, 0.0219, 0.1376, 0.1685], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0077, 0.0091, 0.0081, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 02:55:01,512 INFO [finetune.py:976] (6/7) Epoch 22, batch 3400, loss[loss=0.1527, simple_loss=0.2319, pruned_loss=0.03669, over 4801.00 frames. ], tot_loss[loss=0.1781, simple_loss=0.2499, pruned_loss=0.05315, over 954524.18 frames. ], batch size: 51, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:55:09,696 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5604, 1.4493, 1.4419, 1.4300, 0.9098, 2.3945, 0.8498, 1.2702], device='cuda:6'), covar=tensor([0.3164, 0.2507, 0.2279, 0.2488, 0.1923, 0.0345, 0.2639, 0.1339], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0117, 0.0122, 0.0125, 0.0114, 0.0096, 0.0095, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 02:55:54,857 INFO [finetune.py:976] (6/7) Epoch 22, batch 3450, loss[loss=0.1614, simple_loss=0.2197, pruned_loss=0.05152, over 4386.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.249, pruned_loss=0.05267, over 954741.40 frames. ], batch size: 19, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:56:26,820 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.545e+02 1.898e+02 2.347e+02 3.548e+02, threshold=3.797e+02, percent-clipped=0.0
2023-03-27 02:56:45,232 INFO [finetune.py:976] (6/7) Epoch 22, batch 3500, loss[loss=0.1641, simple_loss=0.2315, pruned_loss=0.04838, over 4863.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.247, pruned_loss=0.0521, over 952887.05 frames. ], batch size: 31, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:56:46,502 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=123784.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:57:18,481 INFO [finetune.py:976] (6/7) Epoch 22, batch 3550, loss[loss=0.1922, simple_loss=0.242, pruned_loss=0.07125, over 4819.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.244, pruned_loss=0.05119, over 953395.33 frames. ], batch size: 39, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:57:19,259 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3876, 2.3607, 1.9195, 2.4649, 2.2755, 2.2496, 2.2433, 3.0981], device='cuda:6'), covar=tensor([0.3876, 0.4825, 0.3594, 0.4280, 0.4329, 0.2632, 0.4194, 0.1752], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0264, 0.0235, 0.0277, 0.0255, 0.0226, 0.0255, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 02:57:36,084 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.884e+01 1.429e+02 1.766e+02 2.143e+02 3.754e+02, threshold=3.531e+02, percent-clipped=0.0
2023-03-27 02:57:45,447 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=123872.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:57:52,358 INFO [finetune.py:976] (6/7) Epoch 22, batch 3600, loss[loss=0.1734, simple_loss=0.2296, pruned_loss=0.05863, over 4744.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2414, pruned_loss=0.05106, over 953573.99 frames. ], batch size: 23, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:57:53,430 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-27 02:58:25,719 INFO [finetune.py:976] (6/7) Epoch 22, batch 3650, loss[loss=0.1805, simple_loss=0.2625, pruned_loss=0.04926, over 4817.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2435, pruned_loss=0.0519, over 953070.98 frames. ], batch size: 40, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:58:26,499 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=123933.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:58:43,171 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.106e+01 1.598e+02 1.925e+02 2.525e+02 5.508e+02, threshold=3.851e+02, percent-clipped=5.0
2023-03-27 02:58:49,859 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=123970.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:58:59,579 INFO [finetune.py:976] (6/7) Epoch 22, batch 3700, loss[loss=0.168, simple_loss=0.2342, pruned_loss=0.05085, over 4170.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2469, pruned_loss=0.05289, over 954450.28 frames. ], batch size: 18, lr: 3.14e-03, grad_scale: 32.0
2023-03-27 02:59:07,030 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4026, 1.3686, 1.9669, 1.7317, 1.6354, 3.4622, 1.3756, 1.5094], device='cuda:6'), covar=tensor([0.1003, 0.1824, 0.1108, 0.1004, 0.1565, 0.0237, 0.1509, 0.1808], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 02:59:23,317 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124018.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:59:34,574 INFO [finetune.py:976] (6/7) Epoch 22, batch 3750, loss[loss=0.1487, simple_loss=0.2231, pruned_loss=0.03712, over 4759.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2478, pruned_loss=0.05282, over 953295.59 frames. ], batch size: 28, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 02:59:37,142 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124036.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:59:40,563 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-27 02:59:51,701 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.170e+02 1.541e+02 1.786e+02 2.095e+02 2.976e+02, threshold=3.572e+02, percent-clipped=0.0
2023-03-27 02:59:53,025 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124062.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 02:59:53,630 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7311, 1.2526, 0.8874, 1.5824, 1.9986, 1.4010, 1.5191, 1.5853], device='cuda:6'), covar=tensor([0.1419, 0.2038, 0.1956, 0.1138, 0.2021, 0.2010, 0.1400, 0.1919], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0121, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-27 02:59:58,297 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6713, 1.2393, 0.9070, 1.5350, 1.9798, 1.0062, 1.5081, 1.5852], device='cuda:6'), covar=tensor([0.1465, 0.2094, 0.1876, 0.1182, 0.1989, 0.2060, 0.1357, 0.1970], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0111, 0.0092, 0.0121, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-27 03:00:03,081 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6848, 2.5363, 2.1634, 1.2192, 2.2577, 2.0042, 1.8866, 2.3476], device='cuda:6'), covar=tensor([0.0876, 0.0696, 0.1403, 0.1935, 0.1311, 0.1961, 0.2115, 0.0893], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0191, 0.0197, 0.0181, 0.0209, 0.0205, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:00:06,948 INFO [finetune.py:976] (6/7) Epoch 22, batch 3800, loss[loss=0.163, simple_loss=0.2269, pruned_loss=0.04957, over 4923.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2492, pruned_loss=0.05325, over 953210.51 frames. ], batch size: 33, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:00:08,719 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124084.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:00:17,170 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124097.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:00:29,396 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6291, 0.7014, 1.8019, 1.6234, 1.5430, 1.4573, 1.5195, 1.7507], device='cuda:6'), covar=tensor([0.3492, 0.3630, 0.2781, 0.3131, 0.4046, 0.3319, 0.3806, 0.2644], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0244, 0.0265, 0.0286, 0.0285, 0.0260, 0.0295, 0.0247], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:00:39,631 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124123.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:00:51,385 INFO [finetune.py:976] (6/7) Epoch 22, batch 3850, loss[loss=0.1446, simple_loss=0.2298, pruned_loss=0.02973, over 4898.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2469, pruned_loss=0.05178, over 954988.81 frames. ], batch size: 37, lr: 3.13e-03, grad_scale: 32.0
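Note on the covar/in_proj_covar/out_proj_covar fields in the dumps above: alongside the per-head entropy values, each record carries running second-moment statistics for tensors inside the module. A rough sketch of how such a diagnostic can be maintained; the decay and the naming are assumptions, not the zipformer.py implementation:

    import torch

    # Sketch: running mean-square ("covar"-style) statistics per name.
    # Decay and structure are illustrative assumptions.
    class StatsCollector:
        def __init__(self, decay=0.99):
            self.decay = decay
            self.stats = {}

        def update(self, name, value):
            sq = (value.detach() ** 2).mean(dim=0)
            prev = self.stats.get(name)
            self.stats[name] = sq if prev is None else (
                self.decay * prev + (1.0 - self.decay) * sq)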
2023-03-27 03:00:51,453 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124132.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:01:20,790 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.486e+02 1.746e+02 2.060e+02 3.550e+02, threshold=3.491e+02, percent-clipped=0.0
2023-03-27 03:01:27,145 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0
2023-03-27 03:01:46,862 INFO [finetune.py:976] (6/7) Epoch 22, batch 3900, loss[loss=0.1449, simple_loss=0.2209, pruned_loss=0.03448, over 4905.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2442, pruned_loss=0.05117, over 955544.20 frames. ], batch size: 43, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:02:09,020 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1173, 2.0041, 1.6789, 1.7390, 1.8597, 1.7992, 1.9209, 2.6132], device='cuda:6'), covar=tensor([0.3361, 0.3952, 0.3112, 0.3504, 0.3561, 0.2377, 0.3618, 0.1578], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0233, 0.0276, 0.0254, 0.0224, 0.0253, 0.0234], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:02:13,820 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9209, 4.8061, 4.5388, 2.6715, 4.9320, 3.6996, 1.0472, 3.4066], device='cuda:6'), covar=tensor([0.2400, 0.1574, 0.1423, 0.3071, 0.0819, 0.0819, 0.4601, 0.1462], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0178, 0.0160, 0.0130, 0.0162, 0.0123, 0.0148, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-27 03:02:17,917 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124228.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:02:20,285 INFO [finetune.py:976] (6/7) Epoch 22, batch 3950, loss[loss=0.151, simple_loss=0.2194, pruned_loss=0.0413, over 4800.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2414, pruned_loss=0.05052, over 953248.33 frames. ], batch size: 29, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:02:29,895 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0
2023-03-27 03:02:34,484 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0
2023-03-27 03:02:39,966 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.148e+02 1.559e+02 1.897e+02 2.284e+02 3.853e+02, threshold=3.794e+02, percent-clipped=3.0
2023-03-27 03:02:47,335 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124272.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:02:53,670 INFO [finetune.py:976] (6/7) Epoch 22, batch 4000, loss[loss=0.1664, simple_loss=0.2441, pruned_loss=0.04437, over 4803.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.241, pruned_loss=0.05086, over 952871.74 frames. ], batch size: 45, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:03:26,972 INFO [finetune.py:976] (6/7) Epoch 22, batch 4050, loss[loss=0.1627, simple_loss=0.2281, pruned_loss=0.04866, over 4711.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2455, pruned_loss=0.05222, over 954210.21 frames. ], batch size: 23, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:03:27,696 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124333.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:03:39,275 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124348.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 03:03:46,866 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.585e+02 1.909e+02 2.213e+02 3.315e+02, threshold=3.818e+02, percent-clipped=0.0
2023-03-27 03:04:00,188 INFO [finetune.py:976] (6/7) Epoch 22, batch 4100, loss[loss=0.1381, simple_loss=0.2023, pruned_loss=0.03695, over 3927.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2461, pruned_loss=0.05193, over 954288.31 frames. ], batch size: 17, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:04:07,293 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124392.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:04:17,138 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1186, 1.1713, 1.4139, 1.2876, 1.3829, 2.4457, 1.2169, 1.3401], device='cuda:6'), covar=tensor([0.1053, 0.1900, 0.1064, 0.1043, 0.1722, 0.0404, 0.1593, 0.1969], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 03:04:19,544 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124409.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 03:04:24,894 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124418.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:04:30,938 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4745, 1.3270, 1.4955, 0.8454, 1.4887, 1.5415, 1.5037, 1.3040], device='cuda:6'), covar=tensor([0.0573, 0.0827, 0.0687, 0.0943, 0.0904, 0.0687, 0.0638, 0.1264], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0135, 0.0139, 0.0120, 0.0125, 0.0137, 0.0138, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:04:33,281 INFO [finetune.py:976] (6/7) Epoch 22, batch 4150, loss[loss=0.2103, simple_loss=0.2755, pruned_loss=0.07257, over 4802.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.248, pruned_loss=0.05207, over 955541.09 frames. ], batch size: 45, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:04:33,414 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124432.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:04:53,595 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.819e+01 1.555e+02 1.746e+02 2.178e+02 5.076e+02, threshold=3.492e+02, percent-clipped=2.0
2023-03-27 03:05:06,935 INFO [finetune.py:976] (6/7) Epoch 22, batch 4200, loss[loss=0.1601, simple_loss=0.2371, pruned_loss=0.04152, over 4928.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.248, pruned_loss=0.05177, over 954408.85 frames. ], batch size: 33, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:05:14,725 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124493.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:05:30,236 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8020, 1.7476, 1.5383, 1.4138, 1.8679, 1.5748, 1.8481, 1.8107], device='cuda:6'), covar=tensor([0.1336, 0.1711, 0.2605, 0.2328, 0.2265, 0.1574, 0.2629, 0.1635], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0189, 0.0236, 0.0254, 0.0248, 0.0204, 0.0216, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:05:40,293 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124528.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:05:42,631 INFO [finetune.py:976] (6/7) Epoch 22, batch 4250, loss[loss=0.1751, simple_loss=0.2264, pruned_loss=0.06192, over 4895.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2464, pruned_loss=0.05125, over 956356.89 frames. ], batch size: 35, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:06:02,065 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.200e+01 1.490e+02 1.704e+02 2.072e+02 3.423e+02, threshold=3.408e+02, percent-clipped=0.0
2023-03-27 03:06:12,355 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124576.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:06:17,862 INFO [finetune.py:976] (6/7) Epoch 22, batch 4300, loss[loss=0.1975, simple_loss=0.2649, pruned_loss=0.06505, over 4900.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2451, pruned_loss=0.05084, over 958103.74 frames. ], batch size: 36, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:06:57,354 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1057, 1.8503, 2.4284, 1.5773, 2.0393, 2.4197, 1.7837, 2.4399], device='cuda:6'), covar=tensor([0.1298, 0.2001, 0.1390, 0.1901, 0.1060, 0.1404, 0.2599, 0.0913], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0205, 0.0190, 0.0189, 0.0173, 0.0214, 0.0215, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:06:59,176 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6195, 1.6519, 1.3645, 1.5453, 1.9622, 1.8134, 1.6072, 1.4016], device='cuda:6'), covar=tensor([0.0326, 0.0313, 0.0600, 0.0324, 0.0196, 0.0557, 0.0303, 0.0443], device='cuda:6'), in_proj_covar=tensor([0.0097, 0.0105, 0.0141, 0.0110, 0.0097, 0.0109, 0.0099, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.5110e-05, 8.0744e-05, 1.1031e-04, 8.4009e-05, 7.5523e-05, 8.0816e-05, 7.3622e-05, 8.3817e-05], device='cuda:6')
2023-03-27 03:07:10,855 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124628.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:07:17,351 INFO [finetune.py:976] (6/7) Epoch 22, batch 4350, loss[loss=0.1403, simple_loss=0.2172, pruned_loss=0.03172, over 4788.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2411, pruned_loss=0.04959, over 958697.07 frames. ], batch size: 29, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:07:21,111 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124638.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:07:26,824 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.98 vs. limit=2.0
2023-03-27 03:07:48,793 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.554e+01 1.414e+02 1.739e+02 2.148e+02 3.782e+02, threshold=3.479e+02, percent-clipped=2.0
2023-03-27 03:07:51,290 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124663.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:08:06,294 INFO [finetune.py:976] (6/7) Epoch 22, batch 4400, loss[loss=0.1819, simple_loss=0.2583, pruned_loss=0.05277, over 4912.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2416, pruned_loss=0.04978, over 958342.43 frames. ], batch size: 37, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:08:12,516 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124692.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:08:17,333 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124699.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:08:20,768 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124704.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 03:08:31,135 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124718.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:08:35,343 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124724.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:08:40,102 INFO [finetune.py:976] (6/7) Epoch 22, batch 4450, loss[loss=0.1717, simple_loss=0.2454, pruned_loss=0.04897, over 4922.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2445, pruned_loss=0.05039, over 958251.87 frames. ], batch size: 38, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:08:43,800 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2683, 2.9097, 3.0503, 3.1896, 3.0564, 2.8574, 3.3291, 0.9463], device='cuda:6'), covar=tensor([0.1134, 0.1021, 0.1038, 0.1210, 0.1672, 0.1885, 0.1133, 0.5874], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0280, 0.0292, 0.0335, 0.0285, 0.0305, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:08:45,009 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124740.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:08:58,178 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.229e+02 1.653e+02 1.910e+02 2.206e+02 5.425e+02, threshold=3.820e+02, percent-clipped=3.0
2023-03-27 03:09:02,350 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124766.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:09:13,854 INFO [finetune.py:976] (6/7) Epoch 22, batch 4500, loss[loss=0.1332, simple_loss=0.2002, pruned_loss=0.03312, over 4751.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2463, pruned_loss=0.05142, over 957306.68 frames. ], batch size: 23, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:09:17,603 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124788.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:09:38,647 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=124820.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:09:47,293 INFO [finetune.py:976] (6/7) Epoch 22, batch 4550, loss[loss=0.1623, simple_loss=0.2494, pruned_loss=0.03755, over 4772.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2481, pruned_loss=0.05178, over 957565.27 frames. ], batch size: 28, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:10:04,843 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.024e+02 1.469e+02 1.742e+02 1.998e+02 4.285e+02, threshold=3.484e+02, percent-clipped=1.0
2023-03-27 03:10:19,608 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=124881.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:10:20,091 INFO [finetune.py:976] (6/7) Epoch 22, batch 4600, loss[loss=0.1697, simple_loss=0.2384, pruned_loss=0.05049, over 4824.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2475, pruned_loss=0.05156, over 955919.44 frames. ], batch size: 30, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:10:50,938 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=124928.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:10:51,040 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.66 vs. limit=5.0
2023-03-27 03:10:53,308 INFO [finetune.py:976] (6/7) Epoch 22, batch 4650, loss[loss=0.2025, simple_loss=0.2658, pruned_loss=0.06959, over 4923.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2455, pruned_loss=0.05111, over 956393.50 frames. ], batch size: 38, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:11:10,703 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.467e+02 1.675e+02 2.008e+02 3.769e+02, threshold=3.350e+02, percent-clipped=1.0
2023-03-27 03:11:21,850 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=124976.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:11:26,384 INFO [finetune.py:976] (6/7) Epoch 22, batch 4700, loss[loss=0.1456, simple_loss=0.2267, pruned_loss=0.03223, over 4837.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2427, pruned_loss=0.05042, over 956946.46 frames. ], batch size: 33, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:11:27,597 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0739, 1.3680, 1.3550, 1.2898, 1.4850, 2.2753, 1.3189, 1.4671], device='cuda:6'), covar=tensor([0.1017, 0.1655, 0.1137, 0.0922, 0.1493, 0.0456, 0.1316, 0.1623], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0076, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 03:11:36,922 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=124994.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:11:43,188 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125004.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 03:11:52,756 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125019.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:12:07,776 INFO [finetune.py:976] (6/7) Epoch 22, batch 4750, loss[loss=0.1384, simple_loss=0.2148, pruned_loss=0.03101, over 4798.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2416, pruned_loss=0.05055, over 957078.39 frames. ], batch size: 29, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:12:16,442 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2774, 1.3662, 1.6875, 1.5162, 1.5418, 2.7879, 1.3040, 1.4707], device='cuda:6'), covar=tensor([0.0991, 0.1734, 0.1104, 0.0910, 0.1475, 0.0333, 0.1467, 0.1668], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 03:12:30,814 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125052.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 03:12:39,937 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.109e+02 1.467e+02 1.710e+02 2.084e+02 3.277e+02, threshold=3.420e+02, percent-clipped=0.0
2023-03-27 03:12:50,483 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-27 03:12:59,254 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9780, 1.8616, 2.0665, 1.4326, 1.8870, 1.9720, 2.1536, 1.6134], device='cuda:6'), covar=tensor([0.0603, 0.0717, 0.0630, 0.0893, 0.0747, 0.0753, 0.0565, 0.1204], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0136, 0.0139, 0.0121, 0.0126, 0.0139, 0.0138, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:13:08,202 INFO [finetune.py:976] (6/7) Epoch 22, batch 4800, loss[loss=0.1552, simple_loss=0.2316, pruned_loss=0.03941, over 4751.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.244, pruned_loss=0.05084, over 956933.72 frames. ], batch size: 27, lr: 3.13e-03, grad_scale: 32.0
2023-03-27 03:13:10,656 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7486, 2.5214, 2.0551, 2.8133, 2.6587, 2.2929, 3.1337, 2.6350], device='cuda:6'), covar=tensor([0.1384, 0.2310, 0.3124, 0.2586, 0.2732, 0.1776, 0.3108, 0.1948], device='cuda:6'), in_proj_covar=tensor([0.0187, 0.0189, 0.0235, 0.0253, 0.0248, 0.0204, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 03:13:16,544 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125088.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:13:28,009 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7734, 1.8826, 1.9445, 1.1619, 2.0223, 2.1337, 2.2558, 1.7095], device='cuda:6'), covar=tensor([0.0751, 0.0509, 0.0541, 0.0489, 0.0444, 0.0712, 0.0301, 0.0587], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0151, 0.0128, 0.0123, 0.0132, 0.0131, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.9816e-05, 1.0900e-04, 9.1497e-05, 8.6684e-05, 9.2461e-05, 9.3149e-05, 1.0195e-04, 1.0755e-04], device='cuda:6')
2023-03-27 03:13:42,371 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125128.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 03:13:45,182 INFO [finetune.py:976] (6/7) Epoch 22, batch 4850, loss[loss=0.1842, simple_loss=0.255, pruned_loss=0.05667, over 4906.00 frames. ], tot_loss[loss=0.175, simple_loss=0.247, pruned_loss=0.05148, over 954862.69 frames. ], batch size: 36, lr: 3.13e-03, grad_scale: 64.0
], batch size: 36, lr: 3.13e-03, grad_scale: 64.0 2023-03-27 03:13:47,687 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125136.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:14:04,223 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.151e+02 1.665e+02 1.915e+02 2.224e+02 3.844e+02, threshold=3.831e+02, percent-clipped=1.0 2023-03-27 03:14:07,447 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1629, 2.7184, 2.4947, 1.3578, 2.7088, 2.2335, 1.9790, 2.4333], device='cuda:6'), covar=tensor([0.1225, 0.0973, 0.1992, 0.2366, 0.1819, 0.2411, 0.2510, 0.1300], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0198, 0.0182, 0.0210, 0.0207, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:14:12,204 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9193, 1.5845, 2.3462, 3.7476, 2.4411, 2.5827, 1.0887, 3.1347], device='cuda:6'), covar=tensor([0.1685, 0.1541, 0.1392, 0.0518, 0.0873, 0.1887, 0.1847, 0.0440], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0164, 0.0101, 0.0136, 0.0124, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 03:14:14,554 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125176.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:14:18,597 INFO [finetune.py:976] (6/7) Epoch 22, batch 4900, loss[loss=0.1574, simple_loss=0.2428, pruned_loss=0.03598, over 4827.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2492, pruned_loss=0.05237, over 955047.90 frames. ], batch size: 38, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:14:23,442 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125189.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:14:52,064 INFO [finetune.py:976] (6/7) Epoch 22, batch 4950, loss[loss=0.2122, simple_loss=0.2768, pruned_loss=0.07385, over 4852.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2498, pruned_loss=0.05229, over 954577.67 frames. ], batch size: 44, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:14:52,645 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3050, 1.2410, 1.4779, 1.0124, 1.2431, 1.4172, 1.2345, 1.5616], device='cuda:6'), covar=tensor([0.1050, 0.1905, 0.1267, 0.1394, 0.0796, 0.1036, 0.2572, 0.0672], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0206, 0.0191, 0.0190, 0.0174, 0.0214, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:15:04,640 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 03:15:12,017 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.098e+02 1.500e+02 1.887e+02 2.169e+02 5.445e+02, threshold=3.774e+02, percent-clipped=5.0 2023-03-27 03:15:25,141 INFO [finetune.py:976] (6/7) Epoch 22, batch 5000, loss[loss=0.1658, simple_loss=0.2312, pruned_loss=0.05021, over 4783.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2475, pruned_loss=0.05175, over 954111.79 frames. 
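Each loss[...] triple in these entries satisfies loss ≈ 0.5 · simple_loss + pruned_loss (e.g. 0.5 × 0.2492 + 0.05237 = 0.17697 ≈ 0.177 at batch 4900, and exactly 0.1755 at batch 5000), which is the usual pruned-transducer combination: a cheap "simple" joiner loss mixed with the exact loss computed over a pruned band of the lattice. The 0.5 and 1.0 weights below are inferred from the logged numbers themselves, i.e. the pruned scale appears to be past its warmup:

```python
# Quick check of loss = 0.5 * simple_loss + 1.0 * pruned_loss against the
# tot_loss values logged above; weights inferred, not taken from the script.
def combined_loss(simple_loss, pruned_loss,
                  simple_loss_scale=0.5, pruned_loss_scale=1.0):
    return simple_loss_scale * simple_loss + pruned_loss_scale * pruned_loss

for simple, pruned, logged in [(0.2492, 0.05237, 0.177),
                               (0.2498, 0.05229, 0.1772),
                               (0.2475, 0.05175, 0.1755)]:
    assert abs(combined_loss(simple, pruned) - logged) < 5e-4
```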
], batch size: 26, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:15:33,547 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125294.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:15:33,660 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.56 vs. limit=5.0 2023-03-27 03:15:50,213 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125319.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:15:54,516 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3484, 1.4772, 1.4723, 0.7890, 1.4756, 1.6399, 1.7640, 1.3554], device='cuda:6'), covar=tensor([0.0837, 0.0527, 0.0481, 0.0458, 0.0403, 0.0567, 0.0240, 0.0614], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0127, 0.0123, 0.0131, 0.0130, 0.0142, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.9530e-05, 1.0839e-04, 9.0869e-05, 8.6484e-05, 9.1786e-05, 9.2647e-05, 1.0143e-04, 1.0698e-04], device='cuda:6') 2023-03-27 03:15:58,103 INFO [finetune.py:976] (6/7) Epoch 22, batch 5050, loss[loss=0.1605, simple_loss=0.2321, pruned_loss=0.04449, over 4820.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2454, pruned_loss=0.05125, over 952854.91 frames. ], batch size: 41, lr: 3.13e-03, grad_scale: 32.0 2023-03-27 03:16:05,260 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125342.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:11,215 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.55 vs. limit=5.0 2023-03-27 03:16:12,320 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 03:16:15,334 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125356.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:18,274 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.708e+01 1.493e+02 1.877e+02 2.229e+02 3.838e+02, threshold=3.755e+02, percent-clipped=1.0 2023-03-27 03:16:22,417 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125367.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:27,963 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125376.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:16:31,543 INFO [finetune.py:976] (6/7) Epoch 22, batch 5100, loss[loss=0.1607, simple_loss=0.2207, pruned_loss=0.0503, over 4793.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2423, pruned_loss=0.05072, over 953720.32 frames. ], batch size: 51, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:16:55,544 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125417.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 03:16:58,422 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125421.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:05,016 INFO [finetune.py:976] (6/7) Epoch 22, batch 5150, loss[loss=0.1293, simple_loss=0.1952, pruned_loss=0.03171, over 4723.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.243, pruned_loss=0.05139, over 954317.12 frames. 
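The scaling.py:679 Whitening lines compare a per-module statistic of the activation covariance against a limit: values near 1.0 mean the channels are close to white (covariance proportional to the identity), and the constraint pushes back when the statistic drifts above the limit, as in metric=5.56 vs. limit=5.0 above. The exact formula is an assumption here; the sketch below uses one metric with the right properties (equal to 1.0 for a perfectly white covariance, growing as a few directions dominate).

```python
# Hedged sketch of a whitening diagnostic: per group, compare mean squared
# covariance entries to the squared mean variance; 1.0 iff cov is c * I.
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    # x: (num_frames, num_channels); channels split into equal groups.
    n, c = x.shape
    cpg = c // num_groups                                 # channels per group
    x = x.reshape(n, num_groups, cpg).transpose(0, 1)     # (groups, frames, cpg)
    cov = torch.matmul(x.transpose(1, 2), x) / n          # per-group covariance
    diag_mean = cov.diagonal(dim1=1, dim2=2).mean()       # mean channel variance
    sq_mean = (cov ** 2).sum(dim=(1, 2)).mean() / cpg     # mean ||cov||_F^2 / cpg
    return float(sq_mean / diag_mean ** 2)

x = torch.randn(1000, 384)   # nearly white features -> metric close to 1.0
print(f"Whitening: num_groups=1, num_channels=384, "
      f"metric={whitening_metric(x, num_groups=1):.2f} vs. limit=5.0")
```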
], batch size: 23, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:17:13,032 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125437.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:33,158 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.242e+02 1.608e+02 1.768e+02 2.290e+02 4.207e+02, threshold=3.536e+02, percent-clipped=1.0 2023-03-27 03:17:45,308 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125476.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:46,257 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.48 vs. limit=5.0 2023-03-27 03:17:47,125 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.6887, 4.0722, 4.2884, 4.5073, 4.4639, 4.1717, 4.8077, 1.6362], device='cuda:6'), covar=tensor([0.0778, 0.0708, 0.0742, 0.0889, 0.1170, 0.1615, 0.0503, 0.5888], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0243, 0.0278, 0.0290, 0.0333, 0.0283, 0.0303, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:17:48,790 INFO [finetune.py:976] (6/7) Epoch 22, batch 5200, loss[loss=0.2139, simple_loss=0.2903, pruned_loss=0.06871, over 4895.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2463, pruned_loss=0.05228, over 953366.23 frames. ], batch size: 35, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:17:53,655 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125482.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:17:54,882 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125484.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:18:39,415 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125524.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:18:44,720 INFO [finetune.py:976] (6/7) Epoch 22, batch 5250, loss[loss=0.169, simple_loss=0.2452, pruned_loss=0.04635, over 4874.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2481, pruned_loss=0.0524, over 951766.07 frames. ], batch size: 31, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:19:03,923 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.075e+02 1.572e+02 1.864e+02 2.118e+02 3.675e+02, threshold=3.728e+02, percent-clipped=1.0 2023-03-27 03:19:14,003 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4845, 1.5122, 1.5282, 0.7479, 1.6053, 1.8315, 1.8892, 1.3757], device='cuda:6'), covar=tensor([0.1093, 0.0798, 0.0596, 0.0669, 0.0471, 0.0676, 0.0322, 0.0863], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0127, 0.0122, 0.0131, 0.0130, 0.0141, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.9692e-05, 1.0811e-04, 9.0771e-05, 8.6280e-05, 9.1631e-05, 9.2621e-05, 1.0076e-04, 1.0670e-04], device='cuda:6') 2023-03-27 03:19:18,599 INFO [finetune.py:976] (6/7) Epoch 22, batch 5300, loss[loss=0.2549, simple_loss=0.3116, pruned_loss=0.09906, over 4905.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2505, pruned_loss=0.05394, over 951584.22 frames. ], batch size: 37, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:19:32,543 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125603.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:19:52,427 INFO [finetune.py:976] (6/7) Epoch 22, batch 5350, loss[loss=0.1838, simple_loss=0.2617, pruned_loss=0.05293, over 4834.00 frames. 
], tot_loss[loss=0.178, simple_loss=0.2497, pruned_loss=0.05319, over 950498.68 frames. ], batch size: 47, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:19:56,759 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8271, 3.3924, 3.5287, 3.6913, 3.5987, 3.3702, 3.9098, 1.1308], device='cuda:6'), covar=tensor([0.0910, 0.0851, 0.0934, 0.1060, 0.1436, 0.1737, 0.0905, 0.5828], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0243, 0.0278, 0.0290, 0.0334, 0.0284, 0.0303, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:20:10,863 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.759e+01 1.403e+02 1.743e+02 2.180e+02 4.274e+02, threshold=3.486e+02, percent-clipped=1.0 2023-03-27 03:20:13,342 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125664.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:20:20,305 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7999, 1.9540, 1.6634, 1.7529, 2.4124, 2.5074, 2.0278, 1.9812], device='cuda:6'), covar=tensor([0.0446, 0.0356, 0.0586, 0.0350, 0.0252, 0.0497, 0.0304, 0.0376], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0106, 0.0142, 0.0111, 0.0098, 0.0111, 0.0101, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.6236e-05, 8.1679e-05, 1.1147e-04, 8.5250e-05, 7.6479e-05, 8.1841e-05, 7.5044e-05, 8.5003e-05], device='cuda:6') 2023-03-27 03:20:24,047 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0791, 1.5158, 2.1028, 2.0517, 1.8258, 1.7959, 2.0088, 2.0121], device='cuda:6'), covar=tensor([0.3533, 0.3733, 0.3103, 0.3657, 0.4872, 0.3778, 0.4219, 0.2999], device='cuda:6'), in_proj_covar=tensor([0.0260, 0.0244, 0.0265, 0.0287, 0.0285, 0.0262, 0.0295, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:20:25,087 INFO [finetune.py:976] (6/7) Epoch 22, batch 5400, loss[loss=0.2085, simple_loss=0.2656, pruned_loss=0.0757, over 4721.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2469, pruned_loss=0.05261, over 950530.38 frames. ], batch size: 54, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:20:29,031 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-27 03:20:43,462 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6833, 1.6461, 1.5782, 1.6633, 1.3763, 3.6034, 1.3919, 1.8219], device='cuda:6'), covar=tensor([0.3096, 0.2366, 0.2008, 0.2192, 0.1551, 0.0201, 0.2460, 0.1173], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:20:44,653 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125712.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 03:20:53,961 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125725.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:20:58,061 INFO [finetune.py:976] (6/7) Epoch 22, batch 5450, loss[loss=0.1586, simple_loss=0.2226, pruned_loss=0.04729, over 4889.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2441, pruned_loss=0.05177, over 952860.51 frames. 
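The attn_weights_entropy dumps (one value per attention head, printed alongside covar/in_proj_covar/out_proj_covar summaries of the attention projections) read like a diagnostic for attention collapse: an entropy near zero would mean a head locks onto a single frame. A minimal sketch of the per-head statistic, with the tensor shape assumed:

```python
# Per-head entropy of attention distributions, averaged over queries.
# Assumes attn_weights has shape (num_heads, num_queries, num_keys) and
# each row already sums to 1 (post-softmax).
import torch

def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    eps = 1e-20   # guard against log(0)
    ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
    return ent.mean(dim=-1)   # one scalar per head

w = torch.softmax(torch.randn(8, 100, 100), dim=-1)
print(attn_weights_entropy(w))   # tensor of 8 per-head entropies
```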
], batch size: 32, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:20:58,130 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:17,000 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.470e+02 1.717e+02 2.000e+02 3.602e+02, threshold=3.434e+02, percent-clipped=1.0 2023-03-27 03:21:22,011 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3457, 2.2328, 1.7288, 2.2442, 2.2093, 1.8688, 2.5552, 2.2841], device='cuda:6'), covar=tensor([0.1282, 0.2072, 0.2932, 0.2511, 0.2536, 0.1753, 0.2958, 0.1620], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0189, 0.0236, 0.0254, 0.0249, 0.0205, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:21:27,701 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:31,104 INFO [finetune.py:976] (6/7) Epoch 22, batch 5500, loss[loss=0.2101, simple_loss=0.2583, pruned_loss=0.08097, over 4093.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2418, pruned_loss=0.05116, over 953530.82 frames. ], batch size: 65, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:21:32,414 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=125784.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:33,647 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=125786.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:21:36,066 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.8587, 4.2701, 4.4308, 4.6664, 4.6087, 4.4071, 5.0050, 1.5442], device='cuda:6'), covar=tensor([0.0783, 0.0812, 0.0810, 0.0992, 0.1228, 0.1436, 0.0532, 0.5689], device='cuda:6'), in_proj_covar=tensor([0.0345, 0.0241, 0.0276, 0.0288, 0.0331, 0.0281, 0.0301, 0.0296], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:22:04,446 INFO [finetune.py:976] (6/7) Epoch 22, batch 5550, loss[loss=0.179, simple_loss=0.2435, pruned_loss=0.05722, over 4886.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2437, pruned_loss=0.05266, over 951368.88 frames. ], batch size: 32, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:22:04,493 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=125832.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:22:05,769 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7222, 1.2323, 0.9518, 1.5906, 2.0796, 1.4354, 1.5946, 1.6199], device='cuda:6'), covar=tensor([0.1502, 0.2103, 0.1903, 0.1223, 0.1992, 0.1895, 0.1437, 0.1907], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0120, 0.0094, 0.0100, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 03:22:32,021 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.163e+02 1.619e+02 1.949e+02 2.379e+02 4.295e+02, threshold=3.899e+02, percent-clipped=6.0 2023-03-27 03:22:46,655 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.56 vs. limit=2.0 2023-03-27 03:22:48,803 INFO [finetune.py:976] (6/7) Epoch 22, batch 5600, loss[loss=0.2034, simple_loss=0.289, pruned_loss=0.05887, over 4837.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2472, pruned_loss=0.0531, over 948787.40 frames. 
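tot_loss[...] is evidently a decayed running average rather than a full-epoch mean: the frame counts are fractional and plateau near 9.5e5 instead of growing without bound. That is the signature of an exponential update tot = tot · (1 − 1/interval) + current, whose steady-state frame count is roughly frames_per_batch × interval (≈ 4800 × 200 ≈ 9.6e5, matching the log). A sketch of that bookkeeping, with the interval of 200 as an assumption:

```python
# Decayed frame-weighted loss tracker reproducing the "tot_loss[... over N frames]"
# arithmetic; the decay interval is assumed, chosen to match the ~9.5e5 plateau.
class LossTracker:
    def __init__(self, interval=200):
        self.decay = 1.0 - 1.0 / interval
        self.loss_frames = 0.0   # decayed sum of (loss * frames)
        self.frames = 0.0        # decayed frame count (hence fractional values)

    def update(self, batch_loss_sum: float, batch_frames: float):
        self.loss_frames = self.loss_frames * self.decay + batch_loss_sum
        self.frames = self.frames * self.decay + batch_frames

    @property
    def tot_loss(self) -> float:
        return self.loss_frames / self.frames

tracker = LossTracker()
tracker.update(batch_loss_sum=0.17 * 4800, batch_frames=4800)
print(f"tot_loss[loss={tracker.tot_loss:.4g}, over {tracker.frames:.2f} frames.]")
```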
], batch size: 40, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:23:20,378 INFO [finetune.py:976] (6/7) Epoch 22, batch 5650, loss[loss=0.1645, simple_loss=0.2379, pruned_loss=0.04557, over 4762.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2486, pruned_loss=0.0525, over 949911.64 frames. ], batch size: 27, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:23:27,398 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6795, 1.5502, 1.1295, 0.3865, 1.3775, 1.4969, 1.5203, 1.4089], device='cuda:6'), covar=tensor([0.1011, 0.0792, 0.1422, 0.1929, 0.1366, 0.2546, 0.2400, 0.0917], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0193, 0.0200, 0.0183, 0.0210, 0.0209, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:23:37,282 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9996, 1.2186, 0.6903, 1.8745, 2.2753, 1.6306, 1.6396, 1.6310], device='cuda:6'), covar=tensor([0.1435, 0.2152, 0.2161, 0.1147, 0.1971, 0.1889, 0.1445, 0.1939], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0109, 0.0092, 0.0119, 0.0093, 0.0099, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 03:23:38,500 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-27 03:23:46,869 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4037, 1.2537, 0.8086, 0.2950, 1.1315, 1.2828, 1.2837, 1.1483], device='cuda:6'), covar=tensor([0.0707, 0.0745, 0.1109, 0.1495, 0.1126, 0.1771, 0.1777, 0.0716], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0193, 0.0200, 0.0183, 0.0210, 0.0209, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:23:47,405 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=125959.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:23:48,545 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.919e+01 1.591e+02 1.865e+02 2.221e+02 3.612e+02, threshold=3.730e+02, percent-clipped=0.0 2023-03-27 03:24:11,010 INFO [finetune.py:976] (6/7) Epoch 22, batch 5700, loss[loss=0.1392, simple_loss=0.2019, pruned_loss=0.03829, over 4216.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2442, pruned_loss=0.05156, over 933623.11 frames. ], batch size: 18, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:24:20,523 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=125998.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:24:27,155 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2421, 1.3729, 1.4328, 0.8074, 1.3915, 1.6172, 1.6582, 1.3488], device='cuda:6'), covar=tensor([0.0935, 0.0616, 0.0585, 0.0517, 0.0680, 0.0735, 0.0406, 0.0714], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0126, 0.0121, 0.0129, 0.0129, 0.0140, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.8973e-05, 1.0715e-04, 9.0524e-05, 8.5594e-05, 9.0747e-05, 9.1818e-05, 9.9924e-05, 1.0580e-04], device='cuda:6') 2023-03-27 03:24:37,871 INFO [finetune.py:976] (6/7) Epoch 23, batch 0, loss[loss=0.1966, simple_loss=0.2757, pruned_loss=0.05872, over 4824.00 frames. ], tot_loss[loss=0.1966, simple_loss=0.2757, pruned_loss=0.05872, over 4824.00 frames. 
], batch size: 47, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:24:37,871 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 03:24:43,658 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6854, 1.6342, 2.1052, 2.9334, 1.9587, 2.3239, 1.0894, 2.4557], device='cuda:6'), covar=tensor([0.1551, 0.1172, 0.1027, 0.0518, 0.0862, 0.1130, 0.1615, 0.0486], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0100, 0.0135, 0.0123, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 03:24:44,668 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9993, 1.7708, 2.0081, 1.3461, 1.9179, 1.9993, 2.0520, 1.6166], device='cuda:6'), covar=tensor([0.0512, 0.0692, 0.0614, 0.0841, 0.0818, 0.0730, 0.0558, 0.1121], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0134, 0.0138, 0.0119, 0.0123, 0.0137, 0.0137, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:24:53,040 INFO [finetune.py:1010] (6/7) Epoch 23, validation: loss=0.1587, simple_loss=0.2268, pruned_loss=0.04533, over 2265189.00 frames. 2023-03-27 03:24:53,040 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 03:24:59,386 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126012.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 03:25:12,071 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126032.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:30,342 INFO [finetune.py:976] (6/7) Epoch 23, batch 50, loss[loss=0.1637, simple_loss=0.2388, pruned_loss=0.04436, over 4900.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2489, pruned_loss=0.05238, over 217236.22 frames. ], batch size: 43, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:25:30,929 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126059.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:31,915 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126060.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:32,453 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.266e+01 1.545e+02 1.923e+02 2.325e+02 3.929e+02, threshold=3.846e+02, percent-clipped=1.0 2023-03-27 03:25:42,849 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126077.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:44,664 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126080.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:25:45,301 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126081.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:26:03,744 INFO [finetune.py:976] (6/7) Epoch 23, batch 100, loss[loss=0.2034, simple_loss=0.2679, pruned_loss=0.0695, over 4823.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.243, pruned_loss=0.05041, over 381557.62 frames. 
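The validation entries above interleave with training at the epoch boundary: the model is switched to eval mode, the whole dev set is scored (hence the fixed 2265189.00-frame denominator each time), and the CUDA high-water mark is reported via the allocator. A hedged sketch of that pass, with the loss_fn interface assumed:

```python
# Frame-weighted validation pass mirroring the "Computing validation loss" /
# "validation: loss=... over N frames" / "Maximum memory allocated" lines.
import torch

@torch.no_grad()
def compute_validation_loss(model, dev_loader, loss_fn, device):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    for batch in dev_loader:
        loss_sum, num_frames = loss_fn(model, batch)   # summed loss + frame count
        tot_loss += float(loss_sum)
        tot_frames += float(num_frames)
    model.train()
    mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={tot_loss / tot_frames:.4f}, "
          f"over {tot_frames:.2f} frames.")
    print(f"Maximum memory allocated so far is {mb}MB")
    return tot_loss / tot_frames
```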
], batch size: 38, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:26:11,480 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2051, 1.5175, 0.7855, 2.0155, 2.4858, 1.8689, 1.8583, 1.9069], device='cuda:6'), covar=tensor([0.1266, 0.2016, 0.2041, 0.1104, 0.1796, 0.1795, 0.1403, 0.1984], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0095, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 03:26:14,892 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126125.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:26:19,893 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6542, 1.4469, 0.9998, 0.2814, 1.1936, 1.4808, 1.4655, 1.3922], device='cuda:6'), covar=tensor([0.0889, 0.0813, 0.1356, 0.1841, 0.1407, 0.2203, 0.2071, 0.0884], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0194, 0.0201, 0.0184, 0.0211, 0.0210, 0.0226, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:26:37,115 INFO [finetune.py:976] (6/7) Epoch 23, batch 150, loss[loss=0.1849, simple_loss=0.2477, pruned_loss=0.06105, over 4836.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2395, pruned_loss=0.05049, over 508388.29 frames. ], batch size: 33, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:26:38,292 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.556e+02 1.791e+02 2.255e+02 5.687e+02, threshold=3.583e+02, percent-clipped=3.0 2023-03-27 03:27:10,672 INFO [finetune.py:976] (6/7) Epoch 23, batch 200, loss[loss=0.1399, simple_loss=0.2094, pruned_loss=0.03517, over 4824.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2387, pruned_loss=0.05095, over 605625.45 frames. ], batch size: 38, lr: 3.12e-03, grad_scale: 32.0 2023-03-27 03:28:05,166 INFO [finetune.py:976] (6/7) Epoch 23, batch 250, loss[loss=0.1413, simple_loss=0.1976, pruned_loss=0.0425, over 4037.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.244, pruned_loss=0.05317, over 681366.68 frames. ], batch size: 17, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:28:05,280 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126259.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:28:06,383 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.143e+02 1.625e+02 1.849e+02 2.180e+02 4.181e+02, threshold=3.697e+02, percent-clipped=1.0 2023-03-27 03:28:12,055 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-27 03:28:37,017 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126307.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:28:40,577 INFO [finetune.py:976] (6/7) Epoch 23, batch 300, loss[loss=0.2196, simple_loss=0.295, pruned_loss=0.0721, over 4892.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2501, pruned_loss=0.05456, over 743639.93 frames. ], batch size: 35, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:28:43,951 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. 
limit=2.0 2023-03-27 03:29:18,309 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1463, 1.9549, 2.1311, 1.3542, 2.0740, 2.1421, 2.1602, 1.6872], device='cuda:6'), covar=tensor([0.0524, 0.0636, 0.0611, 0.0885, 0.0690, 0.0686, 0.0584, 0.1092], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0135, 0.0139, 0.0120, 0.0124, 0.0137, 0.0138, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:29:21,207 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126354.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:29:24,662 INFO [finetune.py:976] (6/7) Epoch 23, batch 350, loss[loss=0.2028, simple_loss=0.2696, pruned_loss=0.06798, over 4806.00 frames. ], tot_loss[loss=0.1808, simple_loss=0.2513, pruned_loss=0.05516, over 788960.23 frames. ], batch size: 40, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:29:25,830 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.521e+01 1.514e+02 1.805e+02 2.248e+02 3.946e+02, threshold=3.610e+02, percent-clipped=1.0 2023-03-27 03:29:39,005 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=126380.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:29:39,594 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126381.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:29:39,631 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5565, 2.4780, 1.9298, 2.6605, 2.4332, 2.0805, 3.0187, 2.5322], device='cuda:6'), covar=tensor([0.1198, 0.2063, 0.2813, 0.2362, 0.2321, 0.1557, 0.2781, 0.1546], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0255, 0.0250, 0.0206, 0.0216, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:29:59,418 INFO [finetune.py:976] (6/7) Epoch 23, batch 400, loss[loss=0.1683, simple_loss=0.2419, pruned_loss=0.04735, over 4928.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2496, pruned_loss=0.05337, over 825892.50 frames. ], batch size: 33, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:30:21,836 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126429.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:30:29,749 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126441.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:30:33,367 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7175, 1.6225, 1.9879, 1.3069, 1.7451, 2.0187, 1.5859, 2.1503], device='cuda:6'), covar=tensor([0.1244, 0.2206, 0.1313, 0.1824, 0.0948, 0.1175, 0.2804, 0.0793], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0207, 0.0193, 0.0191, 0.0174, 0.0216, 0.0217, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:30:35,191 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=126450.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:30:40,549 INFO [finetune.py:976] (6/7) Epoch 23, batch 450, loss[loss=0.2066, simple_loss=0.2626, pruned_loss=0.07533, over 4827.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2482, pruned_loss=0.05242, over 853612.97 frames. 
], batch size: 40, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:30:42,257 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.347e+01 1.414e+02 1.639e+02 2.017e+02 3.767e+02, threshold=3.277e+02, percent-clipped=3.0 2023-03-27 03:31:13,830 INFO [finetune.py:976] (6/7) Epoch 23, batch 500, loss[loss=0.1467, simple_loss=0.2242, pruned_loss=0.03456, over 4784.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.245, pruned_loss=0.05142, over 878238.46 frames. ], batch size: 51, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:31:15,634 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126511.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:31:30,157 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0272, 1.9489, 1.8106, 2.1159, 1.9539, 4.6617, 1.8864, 2.2066], device='cuda:6'), covar=tensor([0.2931, 0.2286, 0.1953, 0.2043, 0.1302, 0.0106, 0.2178, 0.1148], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0115, 0.0120, 0.0123, 0.0112, 0.0095, 0.0093, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:31:37,454 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 03:31:47,405 INFO [finetune.py:976] (6/7) Epoch 23, batch 550, loss[loss=0.1761, simple_loss=0.2484, pruned_loss=0.05187, over 4823.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2421, pruned_loss=0.05085, over 896568.70 frames. ], batch size: 38, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:31:48,617 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.316e+01 1.588e+02 1.845e+02 2.407e+02 4.330e+02, threshold=3.691e+02, percent-clipped=4.0 2023-03-27 03:32:21,216 INFO [finetune.py:976] (6/7) Epoch 23, batch 600, loss[loss=0.2251, simple_loss=0.2804, pruned_loss=0.08489, over 4018.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2429, pruned_loss=0.05161, over 907769.09 frames. ], batch size: 65, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:33:02,526 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=126654.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:33:03,801 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3897, 1.4503, 1.2185, 1.4583, 1.7074, 1.6188, 1.4378, 1.2780], device='cuda:6'), covar=tensor([0.0354, 0.0278, 0.0640, 0.0266, 0.0187, 0.0437, 0.0296, 0.0373], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0106, 0.0142, 0.0111, 0.0098, 0.0111, 0.0100, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.6385e-05, 8.1232e-05, 1.1124e-04, 8.4848e-05, 7.6203e-05, 8.1763e-05, 7.4752e-05, 8.4512e-05], device='cuda:6') 2023-03-27 03:33:05,530 INFO [finetune.py:976] (6/7) Epoch 23, batch 650, loss[loss=0.1898, simple_loss=0.2665, pruned_loss=0.05654, over 4908.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.247, pruned_loss=0.05327, over 918150.76 frames. 
], batch size: 36, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:33:06,762 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.083e+02 1.579e+02 1.902e+02 2.270e+02 1.001e+03, threshold=3.804e+02, percent-clipped=1.0 2023-03-27 03:33:42,521 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=126702.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:33:45,449 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9920, 1.7299, 1.9781, 1.2937, 1.8913, 1.9748, 1.9434, 1.5758], device='cuda:6'), covar=tensor([0.0556, 0.0722, 0.0617, 0.0828, 0.0700, 0.0673, 0.0563, 0.1226], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0134, 0.0138, 0.0119, 0.0124, 0.0136, 0.0136, 0.0160], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:33:47,143 INFO [finetune.py:976] (6/7) Epoch 23, batch 700, loss[loss=0.1882, simple_loss=0.2696, pruned_loss=0.05336, over 4903.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2473, pruned_loss=0.05255, over 924853.73 frames. ], batch size: 35, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:33:55,013 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2659, 2.1325, 1.7425, 2.1942, 2.1741, 1.9391, 2.5136, 2.2586], device='cuda:6'), covar=tensor([0.1338, 0.2006, 0.2981, 0.2596, 0.2515, 0.1739, 0.3027, 0.1662], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0255, 0.0249, 0.0206, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:34:11,553 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126736.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:34:22,633 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.5263, 2.8127, 2.7194, 1.4163, 2.9503, 2.3400, 1.9711, 2.5151], device='cuda:6'), covar=tensor([0.0694, 0.1218, 0.2017, 0.2708, 0.1535, 0.2471, 0.2812, 0.1301], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0192, 0.0198, 0.0182, 0.0209, 0.0209, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:34:29,141 INFO [finetune.py:976] (6/7) Epoch 23, batch 750, loss[loss=0.1483, simple_loss=0.2296, pruned_loss=0.03352, over 4889.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.248, pruned_loss=0.0525, over 929790.92 frames. ], batch size: 32, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:34:30,819 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.063e+01 1.442e+02 1.710e+02 2.057e+02 3.398e+02, threshold=3.419e+02, percent-clipped=0.0 2023-03-27 03:35:00,640 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=126806.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:35:02,395 INFO [finetune.py:976] (6/7) Epoch 23, batch 800, loss[loss=0.1569, simple_loss=0.2379, pruned_loss=0.03792, over 4834.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2478, pruned_loss=0.05221, over 936685.30 frames. 
], batch size: 47, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:35:07,275 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=126816.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:35:08,490 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7883, 1.3724, 0.8805, 1.6842, 2.1209, 1.5052, 1.5615, 1.7666], device='cuda:6'), covar=tensor([0.1424, 0.1923, 0.1917, 0.1152, 0.1894, 0.1999, 0.1442, 0.1866], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0094, 0.0099, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 03:35:44,511 INFO [finetune.py:976] (6/7) Epoch 23, batch 850, loss[loss=0.1613, simple_loss=0.2336, pruned_loss=0.04452, over 4896.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2456, pruned_loss=0.05114, over 942649.98 frames. ], batch size: 32, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:35:45,683 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.329e+01 1.478e+02 1.768e+02 2.024e+02 3.574e+02, threshold=3.536e+02, percent-clipped=1.0 2023-03-27 03:35:56,137 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=126877.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:35:56,367 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 03:36:10,311 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5799, 1.3919, 1.0400, 0.2925, 1.2288, 1.4887, 1.3148, 1.3612], device='cuda:6'), covar=tensor([0.0948, 0.0950, 0.1420, 0.1949, 0.1428, 0.2479, 0.2689, 0.0989], device='cuda:6'), in_proj_covar=tensor([0.0168, 0.0191, 0.0198, 0.0182, 0.0208, 0.0208, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:36:18,421 INFO [finetune.py:976] (6/7) Epoch 23, batch 900, loss[loss=0.1761, simple_loss=0.2486, pruned_loss=0.05174, over 4783.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2427, pruned_loss=0.05035, over 946568.82 frames. ], batch size: 28, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:36:49,240 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.99 vs. limit=2.0 2023-03-27 03:36:52,040 INFO [finetune.py:976] (6/7) Epoch 23, batch 950, loss[loss=0.2007, simple_loss=0.2606, pruned_loss=0.07043, over 4935.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2408, pruned_loss=0.05016, over 947951.26 frames. ], batch size: 33, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:36:53,226 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.057e+02 1.486e+02 1.775e+02 2.132e+02 3.351e+02, threshold=3.551e+02, percent-clipped=0.0 2023-03-27 03:37:26,098 INFO [finetune.py:976] (6/7) Epoch 23, batch 1000, loss[loss=0.2004, simple_loss=0.2777, pruned_loss=0.06157, over 4919.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2443, pruned_loss=0.05148, over 949157.01 frames. ], batch size: 38, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:37:43,502 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127036.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:38:01,445 INFO [finetune.py:976] (6/7) Epoch 23, batch 1050, loss[loss=0.1498, simple_loss=0.2263, pruned_loss=0.03662, over 4744.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2465, pruned_loss=0.05187, over 948106.44 frames. 
], batch size: 54, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:38:02,654 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.597e+02 1.830e+02 2.248e+02 5.450e+02, threshold=3.660e+02, percent-clipped=4.0 2023-03-27 03:38:27,255 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127084.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:38:32,731 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 03:38:42,630 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127106.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:38:44,897 INFO [finetune.py:976] (6/7) Epoch 23, batch 1100, loss[loss=0.2009, simple_loss=0.2802, pruned_loss=0.06076, over 4850.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2481, pruned_loss=0.05166, over 950898.08 frames. ], batch size: 44, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:39:14,544 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127154.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:39:21,918 INFO [finetune.py:976] (6/7) Epoch 23, batch 1150, loss[loss=0.1704, simple_loss=0.2438, pruned_loss=0.04856, over 4920.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2494, pruned_loss=0.052, over 951421.48 frames. ], batch size: 38, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:39:23,615 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.547e+02 1.781e+02 2.032e+02 3.739e+02, threshold=3.562e+02, percent-clipped=1.0 2023-03-27 03:39:39,210 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127172.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:39:40,465 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6299, 1.5145, 1.4930, 1.5758, 1.2269, 3.2475, 1.3199, 1.6958], device='cuda:6'), covar=tensor([0.3215, 0.2570, 0.2078, 0.2269, 0.1687, 0.0241, 0.2565, 0.1256], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:40:04,147 INFO [finetune.py:976] (6/7) Epoch 23, batch 1200, loss[loss=0.1991, simple_loss=0.2621, pruned_loss=0.06808, over 4703.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.248, pruned_loss=0.05174, over 952618.89 frames. 
], batch size: 54, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:40:13,055 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7859, 1.7316, 1.6070, 1.6997, 1.3190, 3.6977, 1.4733, 1.9022], device='cuda:6'), covar=tensor([0.3192, 0.2321, 0.2081, 0.2409, 0.1617, 0.0193, 0.2410, 0.1220], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:40:33,008 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127249.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:40:33,059 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1403, 1.7467, 2.1961, 2.1034, 1.9158, 1.8927, 2.1129, 2.0538], device='cuda:6'), covar=tensor([0.4404, 0.4156, 0.3081, 0.4150, 0.5213, 0.4039, 0.4824, 0.3098], device='cuda:6'), in_proj_covar=tensor([0.0260, 0.0244, 0.0265, 0.0288, 0.0286, 0.0263, 0.0295, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:40:44,043 INFO [finetune.py:976] (6/7) Epoch 23, batch 1250, loss[loss=0.1588, simple_loss=0.2454, pruned_loss=0.03613, over 4913.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2456, pruned_loss=0.05125, over 953583.94 frames. ], batch size: 36, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:40:45,214 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.489e+02 1.742e+02 2.248e+02 3.707e+02, threshold=3.484e+02, percent-clipped=1.0 2023-03-27 03:41:21,265 INFO [finetune.py:976] (6/7) Epoch 23, batch 1300, loss[loss=0.1594, simple_loss=0.2253, pruned_loss=0.04671, over 4823.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2419, pruned_loss=0.05011, over 952825.30 frames. ], batch size: 30, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:41:22,019 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127310.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:41:40,948 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9504, 2.3335, 1.0435, 2.7635, 3.0853, 2.2675, 2.7835, 2.4222], device='cuda:6'), covar=tensor([0.1097, 0.1756, 0.2115, 0.0961, 0.1415, 0.1450, 0.1128, 0.1804], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 03:41:40,958 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1879, 2.5468, 2.8388, 2.3497, 2.5735, 4.8876, 2.5577, 2.5073], device='cuda:6'), covar=tensor([0.0863, 0.1402, 0.0893, 0.0946, 0.1274, 0.0141, 0.1044, 0.1493], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0092, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:41:54,394 INFO [finetune.py:976] (6/7) Epoch 23, batch 1350, loss[loss=0.1644, simple_loss=0.2389, pruned_loss=0.04499, over 4815.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2429, pruned_loss=0.05069, over 955133.56 frames. 
], batch size: 51, lr: 3.11e-03, grad_scale: 64.0 2023-03-27 03:41:55,607 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.500e+02 1.797e+02 2.250e+02 4.549e+02, threshold=3.594e+02, percent-clipped=1.0 2023-03-27 03:42:27,768 INFO [finetune.py:976] (6/7) Epoch 23, batch 1400, loss[loss=0.1958, simple_loss=0.2762, pruned_loss=0.05773, over 4836.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2451, pruned_loss=0.05087, over 955639.69 frames. ], batch size: 47, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:42:40,737 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127427.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 03:42:50,481 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6422, 1.7573, 2.2377, 2.0553, 1.9049, 4.2998, 1.6275, 1.7723], device='cuda:6'), covar=tensor([0.0934, 0.1713, 0.1154, 0.0932, 0.1558, 0.0163, 0.1481, 0.1817], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:42:58,110 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8460, 1.7628, 1.5154, 1.9853, 2.3333, 1.9598, 1.6628, 1.4645], device='cuda:6'), covar=tensor([0.2079, 0.1889, 0.1940, 0.1543, 0.1614, 0.1190, 0.2230, 0.1890], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0213, 0.0196, 0.0244, 0.0189, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:43:01,002 INFO [finetune.py:976] (6/7) Epoch 23, batch 1450, loss[loss=0.1837, simple_loss=0.2627, pruned_loss=0.05235, over 4857.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2482, pruned_loss=0.05154, over 956304.88 frames. ], batch size: 31, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:43:03,313 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.817e+01 1.596e+02 1.913e+02 2.194e+02 3.811e+02, threshold=3.827e+02, percent-clipped=1.0 2023-03-27 03:43:09,875 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127472.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:43:09,994 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.64 vs. limit=2.0 2023-03-27 03:43:25,061 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127488.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 03:43:46,711 INFO [finetune.py:976] (6/7) Epoch 23, batch 1500, loss[loss=0.1802, simple_loss=0.2564, pruned_loss=0.052, over 4762.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2503, pruned_loss=0.05232, over 956778.54 frames. 
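grad_scale flips between 32 and 64 across these entries (64.0 at batch 1350, back to 32.0 by batch 1400, and the same pattern earlier around Epoch 22 batch 4850-4900): dynamic loss scaling for the mixed-precision run, which doubles the scale after a stretch of overflow-free steps and halves it when a step overflows. Whether the training script uses its own scaler or PyTorch's, the behavior matches the standard one:

```python
# Standard dynamic loss scaling with PyTorch's GradScaler; init_scale and
# growth_interval here are illustrative, not read from the training script.
import torch

scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)

def training_step(model, optimizer, features, targets, loss_fn):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(features), targets)
    scaler.scale(loss).backward()   # backprop with the scaled loss
    scaler.step(optimizer)          # internally unscales; skips step on inf/nan
    scaler.update()                 # grows the scale, or halves it on overflow
    return loss
```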
], batch size: 28, lr: 3.11e-03, grad_scale: 32.0 2023-03-27 03:43:52,763 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8813, 1.7542, 2.2364, 1.4504, 2.0259, 2.2374, 1.6541, 2.2843], device='cuda:6'), covar=tensor([0.1168, 0.2015, 0.1184, 0.1687, 0.0858, 0.1098, 0.2677, 0.0879], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0205, 0.0191, 0.0189, 0.0173, 0.0213, 0.0216, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:43:54,390 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127520.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:43:56,274 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3430, 1.5676, 0.8505, 2.1763, 2.5827, 1.8713, 2.0244, 1.8445], device='cuda:6'), covar=tensor([0.1356, 0.2101, 0.1988, 0.1094, 0.1738, 0.1780, 0.1363, 0.2118], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 03:44:00,481 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127529.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:44:20,427 INFO [finetune.py:976] (6/7) Epoch 23, batch 1550, loss[loss=0.1673, simple_loss=0.2526, pruned_loss=0.04099, over 4758.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2486, pruned_loss=0.05129, over 956893.42 frames. ], batch size: 26, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:44:22,244 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.462e+02 1.755e+02 2.123e+02 3.197e+02, threshold=3.511e+02, percent-clipped=0.0 2023-03-27 03:44:48,010 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127590.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:45:04,791 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127605.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:45:07,683 INFO [finetune.py:976] (6/7) Epoch 23, batch 1600, loss[loss=0.1598, simple_loss=0.2282, pruned_loss=0.04564, over 4823.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2468, pruned_loss=0.05115, over 957048.52 frames. ], batch size: 30, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:45:34,757 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-27 03:45:40,959 INFO [finetune.py:976] (6/7) Epoch 23, batch 1650, loss[loss=0.1937, simple_loss=0.2677, pruned_loss=0.05989, over 4758.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2441, pruned_loss=0.0503, over 956291.00 frames. 
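The lr column creeps down far too slowly for a step decay (3.13e-03 a few thousand batches ago, 3.10e-03 here), but it is consistent with an Eden-style schedule that discounts a base lr by both batch and epoch counts. The constants below are assumptions, chosen because they reproduce the logged values at this point in training (batch_count ≈ 1.25e5, epoch 22):

```python
# Eden-style learning-rate schedule (assumed form and constants).
def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
    batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

print(f"lr: {eden_lr(0.004, batch=125400, epoch=22):.2e}")   # ~3.12e-03
```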
], batch size: 54, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:45:43,333 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.567e+02 1.812e+02 2.212e+02 4.212e+02, threshold=3.624e+02, percent-clipped=4.0 2023-03-27 03:45:43,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127662.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:46:26,409 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1528, 2.8997, 2.7615, 1.1769, 2.9489, 2.2382, 0.6048, 1.9350], device='cuda:6'), covar=tensor([0.2913, 0.2666, 0.1881, 0.3714, 0.1554, 0.1201, 0.4449, 0.1808], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0178, 0.0159, 0.0128, 0.0161, 0.0124, 0.0149, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 03:46:26,937 INFO [finetune.py:976] (6/7) Epoch 23, batch 1700, loss[loss=0.2505, simple_loss=0.3006, pruned_loss=0.1001, over 4839.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2426, pruned_loss=0.05046, over 956314.60 frames. ], batch size: 49, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:46:36,930 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127723.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:47:00,095 INFO [finetune.py:976] (6/7) Epoch 23, batch 1750, loss[loss=0.2417, simple_loss=0.3112, pruned_loss=0.08611, over 4859.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2461, pruned_loss=0.05199, over 957739.82 frames. ], batch size: 49, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:47:01,903 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.127e+02 1.511e+02 1.783e+02 2.157e+02 3.427e+02, threshold=3.565e+02, percent-clipped=0.0 2023-03-27 03:47:13,903 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127779.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:47:16,786 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127783.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 03:47:24,523 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.52 vs. limit=5.0 2023-03-27 03:47:33,988 INFO [finetune.py:976] (6/7) Epoch 23, batch 1800, loss[loss=0.1682, simple_loss=0.2488, pruned_loss=0.04385, over 4863.00 frames. ], tot_loss[loss=0.177, simple_loss=0.2488, pruned_loss=0.0526, over 957519.14 frames. ], batch size: 34, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:47:54,757 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127840.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:48:01,834 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127850.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:48:07,745 INFO [finetune.py:976] (6/7) Epoch 23, batch 1850, loss[loss=0.2061, simple_loss=0.2748, pruned_loss=0.06874, over 4793.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2508, pruned_loss=0.05344, over 958457.11 frames. 
], batch size: 51, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:48:09,574 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.256e+01 1.480e+02 1.674e+02 2.095e+02 4.093e+02, threshold=3.347e+02, percent-clipped=1.0 2023-03-27 03:48:14,536 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3654, 1.9050, 2.3612, 2.3790, 2.0487, 2.0493, 2.2673, 2.2046], device='cuda:6'), covar=tensor([0.3784, 0.3966, 0.3148, 0.4053, 0.4814, 0.3772, 0.4526, 0.2903], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0243, 0.0263, 0.0287, 0.0285, 0.0262, 0.0295, 0.0247], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:48:16,139 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=127872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:48:24,941 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=127885.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:48:29,945 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.69 vs. limit=5.0 2023-03-27 03:48:40,231 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=127905.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:48:42,627 INFO [finetune.py:976] (6/7) Epoch 23, batch 1900, loss[loss=0.1824, simple_loss=0.2685, pruned_loss=0.04814, over 4917.00 frames. ], tot_loss[loss=0.1792, simple_loss=0.2514, pruned_loss=0.05355, over 957435.14 frames. ], batch size: 42, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:48:43,937 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127911.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:48:58,801 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=127933.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:49:03,526 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7270, 1.5672, 2.0788, 3.1923, 2.1435, 2.2664, 1.0907, 2.6804], device='cuda:6'), covar=tensor([0.1862, 0.1519, 0.1452, 0.0867, 0.0869, 0.1562, 0.1973, 0.0556], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0117, 0.0136, 0.0166, 0.0102, 0.0138, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 03:49:11,965 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=127953.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:49:16,014 INFO [finetune.py:976] (6/7) Epoch 23, batch 1950, loss[loss=0.1637, simple_loss=0.2339, pruned_loss=0.04676, over 4816.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2499, pruned_loss=0.05309, over 955215.14 frames. 
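The per-batch counts of roughly 4800-4900 frames (e.g. "over 4917.00 frames" and "over 4816.00 frames" above) line up with about 200 s of audio per batch at a 10 ms frame shift and 4× encoder subsampling; those three constants are assumptions from the standard setup, not read out of the log. A one-line sanity check:

```python
# Upper bound on subsampled encoder frames per batch under the assumed setup.
def encoder_frames(batch_seconds=200.0, frame_shift_s=0.01, subsampling=4):
    return batch_seconds / frame_shift_s / subsampling

print(encoder_frames())   # 5000.0 -- the logged 4700-4930 sits just under this
```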
], batch size: 38, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:49:16,111 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7035, 1.4566, 2.0227, 1.9111, 1.6400, 3.5348, 1.3806, 1.5365], device='cuda:6'), covar=tensor([0.0935, 0.1737, 0.1144, 0.0914, 0.1608, 0.0249, 0.1444, 0.1871], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0074, 0.0076, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:49:17,848 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.653e+01 1.391e+02 1.764e+02 2.084e+02 3.552e+02, threshold=3.528e+02, percent-clipped=1.0 2023-03-27 03:49:53,065 INFO [finetune.py:976] (6/7) Epoch 23, batch 2000, loss[loss=0.1516, simple_loss=0.2229, pruned_loss=0.04018, over 4825.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2468, pruned_loss=0.052, over 954670.21 frames. ], batch size: 33, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:50:07,585 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128018.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:50:27,218 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9670, 1.8390, 1.7921, 2.0479, 2.4518, 2.0140, 1.8325, 1.7339], device='cuda:6'), covar=tensor([0.1713, 0.1756, 0.1549, 0.1338, 0.1400, 0.1127, 0.2123, 0.1576], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0214, 0.0197, 0.0245, 0.0190, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:50:29,544 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8081, 1.1513, 1.8327, 1.8016, 1.6518, 1.5779, 1.7308, 1.8049], device='cuda:6'), covar=tensor([0.3895, 0.3548, 0.2959, 0.3491, 0.3981, 0.3285, 0.3715, 0.2739], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0243, 0.0264, 0.0288, 0.0286, 0.0262, 0.0295, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:50:38,972 INFO [finetune.py:976] (6/7) Epoch 23, batch 2050, loss[loss=0.1917, simple_loss=0.2664, pruned_loss=0.05851, over 4804.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2447, pruned_loss=0.05179, over 954775.61 frames. ], batch size: 51, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:50:41,272 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.483e+02 1.749e+02 2.101e+02 3.191e+02, threshold=3.498e+02, percent-clipped=0.0 2023-03-27 03:50:42,885 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. 
limit=2.0 2023-03-27 03:50:55,171 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128083.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 03:50:56,359 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5106, 1.4915, 1.4349, 1.5468, 0.9770, 3.2341, 1.2151, 1.5830], device='cuda:6'), covar=tensor([0.3413, 0.2643, 0.2112, 0.2308, 0.1926, 0.0236, 0.2702, 0.1331], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0117, 0.0121, 0.0124, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:51:01,045 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3804, 2.2265, 2.7385, 1.9129, 2.3716, 2.7724, 2.1343, 2.7083], device='cuda:6'), covar=tensor([0.1188, 0.1635, 0.1388, 0.1651, 0.0898, 0.1227, 0.2162, 0.0918], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0194, 0.0191, 0.0175, 0.0215, 0.0218, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:51:13,975 INFO [finetune.py:976] (6/7) Epoch 23, batch 2100, loss[loss=0.1999, simple_loss=0.2783, pruned_loss=0.06076, over 4843.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2447, pruned_loss=0.05188, over 953170.51 frames. ], batch size: 47, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:51:40,726 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128131.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 03:51:44,108 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128135.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:52:00,526 INFO [finetune.py:976] (6/7) Epoch 23, batch 2150, loss[loss=0.1643, simple_loss=0.2375, pruned_loss=0.0455, over 4757.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2466, pruned_loss=0.05236, over 952595.13 frames. ], batch size: 28, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:52:02,379 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.558e+02 1.811e+02 2.178e+02 3.611e+02, threshold=3.622e+02, percent-clipped=2.0 2023-03-27 03:52:17,017 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128185.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:52:31,850 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128206.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:52:33,600 INFO [finetune.py:976] (6/7) Epoch 23, batch 2200, loss[loss=0.2073, simple_loss=0.281, pruned_loss=0.06682, over 4908.00 frames. ], tot_loss[loss=0.1784, simple_loss=0.2496, pruned_loss=0.05359, over 952528.27 frames. 
], batch size: 37, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:52:46,637 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128228.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:52:47,952 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0337, 2.0072, 1.7210, 1.9360, 1.8308, 1.8799, 1.9201, 2.6025], device='cuda:6'), covar=tensor([0.3784, 0.4131, 0.3127, 0.3848, 0.4280, 0.2370, 0.3678, 0.1628], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0235, 0.0275, 0.0256, 0.0227, 0.0254, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:52:49,641 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128233.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:53:07,264 INFO [finetune.py:976] (6/7) Epoch 23, batch 2250, loss[loss=0.1925, simple_loss=0.2701, pruned_loss=0.05749, over 4890.00 frames. ], tot_loss[loss=0.1793, simple_loss=0.2505, pruned_loss=0.05404, over 952938.92 frames. ], batch size: 32, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:53:09,085 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.465e+01 1.491e+02 1.772e+02 2.217e+02 3.841e+02, threshold=3.544e+02, percent-clipped=1.0 2023-03-27 03:53:28,175 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6984, 1.5461, 1.6034, 1.6103, 1.3201, 3.8675, 1.4607, 1.8714], device='cuda:6'), covar=tensor([0.3435, 0.2707, 0.2158, 0.2503, 0.1701, 0.0160, 0.2615, 0.1309], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0124, 0.0114, 0.0096, 0.0095, 0.0096], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 03:53:40,844 INFO [finetune.py:976] (6/7) Epoch 23, batch 2300, loss[loss=0.2262, simple_loss=0.2905, pruned_loss=0.08095, over 4140.00 frames. ], tot_loss[loss=0.1782, simple_loss=0.2502, pruned_loss=0.05314, over 951609.62 frames. ], batch size: 65, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:53:47,301 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128318.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:53:47,924 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4098, 1.3084, 1.2931, 1.4040, 1.6540, 1.5656, 1.4082, 1.2681], device='cuda:6'), covar=tensor([0.0353, 0.0320, 0.0608, 0.0312, 0.0231, 0.0524, 0.0346, 0.0376], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0144, 0.0111, 0.0100, 0.0111, 0.0101, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7315e-05, 8.1618e-05, 1.1282e-04, 8.5458e-05, 7.7366e-05, 8.1967e-05, 7.5318e-05, 8.5303e-05], device='cuda:6') 2023-03-27 03:53:48,503 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128320.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:54:13,559 INFO [finetune.py:976] (6/7) Epoch 23, batch 2350, loss[loss=0.1801, simple_loss=0.2502, pruned_loss=0.055, over 4821.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.2482, pruned_loss=0.05282, over 951619.13 frames. 
], batch size: 39, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:54:15,917 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.510e+02 1.720e+02 2.103e+02 3.385e+02, threshold=3.440e+02, percent-clipped=0.0 2023-03-27 03:54:18,453 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128366.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:54:28,990 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128381.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:54:44,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3148, 1.9758, 2.6086, 4.3040, 3.0590, 2.9159, 0.7936, 3.5702], device='cuda:6'), covar=tensor([0.1528, 0.1424, 0.1340, 0.0452, 0.0608, 0.1383, 0.2055, 0.0424], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0164, 0.0100, 0.0137, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 03:54:46,896 INFO [finetune.py:976] (6/7) Epoch 23, batch 2400, loss[loss=0.1247, simple_loss=0.2033, pruned_loss=0.02303, over 4827.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2449, pruned_loss=0.05154, over 953753.83 frames. ], batch size: 39, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:54:49,449 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128413.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:55:06,828 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128435.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:55:35,543 INFO [finetune.py:976] (6/7) Epoch 23, batch 2450, loss[loss=0.2364, simple_loss=0.2895, pruned_loss=0.09164, over 4792.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2428, pruned_loss=0.05136, over 953855.94 frames. ], batch size: 29, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:55:41,389 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.054e+01 1.492e+02 1.779e+02 2.209e+02 4.084e+02, threshold=3.557e+02, percent-clipped=2.0 2023-03-27 03:55:49,827 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128474.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:55:53,385 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8832, 2.2148, 1.1594, 2.7658, 3.1893, 2.2231, 2.7142, 2.2572], device='cuda:6'), covar=tensor([0.1042, 0.1587, 0.1670, 0.0821, 0.1227, 0.1504, 0.0968, 0.1710], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 03:55:55,748 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128483.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:56:10,274 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128506.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:56:12,480 INFO [finetune.py:976] (6/7) Epoch 23, batch 2500, loss[loss=0.1659, simple_loss=0.2367, pruned_loss=0.04753, over 4910.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2434, pruned_loss=0.05139, over 955576.28 frames. 
], batch size: 35, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:56:25,973 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128528.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:56:48,371 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128554.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:56:55,781 INFO [finetune.py:976] (6/7) Epoch 23, batch 2550, loss[loss=0.2001, simple_loss=0.2721, pruned_loss=0.06406, over 4934.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2456, pruned_loss=0.05191, over 955554.13 frames. ], batch size: 33, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:56:58,080 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.0319, 3.5313, 3.7135, 3.9475, 3.8045, 3.5934, 4.1419, 1.4399], device='cuda:6'), covar=tensor([0.0908, 0.0863, 0.0953, 0.1000, 0.1358, 0.1789, 0.0805, 0.5741], device='cuda:6'), in_proj_covar=tensor([0.0347, 0.0246, 0.0280, 0.0291, 0.0338, 0.0286, 0.0302, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:56:58,591 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.110e+02 1.563e+02 1.837e+02 2.202e+02 4.665e+02, threshold=3.674e+02, percent-clipped=3.0 2023-03-27 03:57:06,933 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-27 03:57:11,630 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=128576.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:57:33,478 INFO [finetune.py:976] (6/7) Epoch 23, batch 2600, loss[loss=0.1753, simple_loss=0.2519, pruned_loss=0.04936, over 4752.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2465, pruned_loss=0.05113, over 956583.57 frames. ], batch size: 27, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:57:34,807 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0345, 1.0397, 0.9666, 1.1485, 1.2240, 1.1424, 1.0030, 0.9723], device='cuda:6'), covar=tensor([0.0383, 0.0320, 0.0666, 0.0314, 0.0276, 0.0476, 0.0408, 0.0422], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0106, 0.0143, 0.0111, 0.0099, 0.0110, 0.0101, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.6679e-05, 8.1125e-05, 1.1217e-04, 8.4799e-05, 7.7059e-05, 8.1363e-05, 7.4980e-05, 8.4777e-05], device='cuda:6') 2023-03-27 03:57:56,087 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-27 03:58:07,053 INFO [finetune.py:976] (6/7) Epoch 23, batch 2650, loss[loss=0.1439, simple_loss=0.2216, pruned_loss=0.03308, over 4727.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2482, pruned_loss=0.05176, over 956805.61 frames. ], batch size: 23, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:58:08,891 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.089e+02 1.579e+02 1.820e+02 2.183e+02 3.562e+02, threshold=3.640e+02, percent-clipped=0.0 2023-03-27 03:58:18,914 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128676.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:58:34,009 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-27 03:58:40,906 INFO [finetune.py:976] (6/7) Epoch 23, batch 2700, loss[loss=0.1921, simple_loss=0.2493, pruned_loss=0.06744, over 4833.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2462, pruned_loss=0.0501, over 958218.34 frames. 
], batch size: 49, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:59:11,774 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4739, 1.3712, 1.4821, 0.8150, 1.5426, 1.5496, 1.4154, 1.3281], device='cuda:6'), covar=tensor([0.0558, 0.0863, 0.0679, 0.0899, 0.0857, 0.0693, 0.0646, 0.1248], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0134, 0.0137, 0.0118, 0.0123, 0.0135, 0.0136, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 03:59:14,674 INFO [finetune.py:976] (6/7) Epoch 23, batch 2750, loss[loss=0.1451, simple_loss=0.2268, pruned_loss=0.0317, over 4791.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2447, pruned_loss=0.05021, over 957434.14 frames. ], batch size: 29, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:59:16,469 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.314e+01 1.516e+02 1.813e+02 2.146e+02 3.615e+02, threshold=3.627e+02, percent-clipped=0.0 2023-03-27 03:59:21,321 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=128769.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:59:27,045 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9266, 4.6163, 4.3448, 2.3378, 4.7422, 3.6904, 0.8807, 3.3096], device='cuda:6'), covar=tensor([0.2401, 0.1526, 0.1462, 0.3137, 0.0691, 0.0752, 0.4606, 0.1307], device='cuda:6'), in_proj_covar=tensor([0.0155, 0.0180, 0.0161, 0.0130, 0.0163, 0.0125, 0.0150, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 03:59:30,026 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128781.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 03:59:30,035 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4507, 1.4043, 1.1663, 1.2218, 1.8541, 1.7877, 1.4621, 1.3515], device='cuda:6'), covar=tensor([0.0386, 0.0395, 0.0758, 0.0409, 0.0253, 0.0437, 0.0345, 0.0457], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0106, 0.0143, 0.0110, 0.0099, 0.0110, 0.0101, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.6743e-05, 8.1308e-05, 1.1236e-04, 8.4567e-05, 7.7116e-05, 8.1441e-05, 7.4986e-05, 8.4585e-05], device='cuda:6') 2023-03-27 03:59:48,512 INFO [finetune.py:976] (6/7) Epoch 23, batch 2800, loss[loss=0.1514, simple_loss=0.2271, pruned_loss=0.0378, over 4826.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2423, pruned_loss=0.04975, over 956143.16 frames. ], batch size: 39, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 03:59:50,453 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1640, 2.0138, 1.7679, 1.9588, 1.9369, 1.9194, 1.9752, 2.7199], device='cuda:6'), covar=tensor([0.3565, 0.4119, 0.3196, 0.3998, 0.4136, 0.2373, 0.3792, 0.1597], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0262, 0.0235, 0.0276, 0.0255, 0.0227, 0.0254, 0.0236], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:00:10,900 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128842.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:00:22,071 INFO [finetune.py:976] (6/7) Epoch 23, batch 2850, loss[loss=0.1395, simple_loss=0.2096, pruned_loss=0.03471, over 4766.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2416, pruned_loss=0.05008, over 954765.96 frames. 
], batch size: 26, lr: 3.10e-03, grad_scale: 32.0 2023-03-27 04:00:23,885 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.021e+01 1.432e+02 1.754e+02 2.169e+02 4.729e+02, threshold=3.508e+02, percent-clipped=1.0 2023-03-27 04:00:30,444 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4561, 1.5385, 1.2262, 1.4871, 1.8117, 1.6913, 1.4630, 1.2628], device='cuda:6'), covar=tensor([0.0363, 0.0292, 0.0583, 0.0295, 0.0206, 0.0458, 0.0325, 0.0394], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0107, 0.0145, 0.0111, 0.0100, 0.0111, 0.0102, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7292e-05, 8.1900e-05, 1.1326e-04, 8.5308e-05, 7.7440e-05, 8.2125e-05, 7.5647e-05, 8.5244e-05], device='cuda:6') 2023-03-27 04:00:55,190 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=128890.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:01:00,348 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 04:01:07,640 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7354, 1.5942, 2.0908, 3.4013, 2.2931, 2.5259, 0.9130, 2.7835], device='cuda:6'), covar=tensor([0.1669, 0.1284, 0.1267, 0.0550, 0.0768, 0.1279, 0.1847, 0.0491], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0117, 0.0134, 0.0165, 0.0100, 0.0137, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 04:01:08,143 INFO [finetune.py:976] (6/7) Epoch 23, batch 2900, loss[loss=0.1717, simple_loss=0.2471, pruned_loss=0.04809, over 4813.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.245, pruned_loss=0.05112, over 956028.39 frames. ], batch size: 38, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:01:36,738 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=128951.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 04:01:41,990 INFO [finetune.py:976] (6/7) Epoch 23, batch 2950, loss[loss=0.1218, simple_loss=0.1906, pruned_loss=0.0265, over 4138.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2471, pruned_loss=0.05158, over 953119.18 frames. ], batch size: 18, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:01:43,784 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.806e+01 1.599e+02 2.012e+02 2.314e+02 4.261e+02, threshold=4.024e+02, percent-clipped=1.0 2023-03-27 04:01:50,830 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=5.01 vs. 
limit=5.0 2023-03-27 04:01:52,817 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=128976.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:02:18,832 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8663, 1.3895, 1.8951, 1.8875, 1.6968, 1.6220, 1.8430, 1.7646], device='cuda:6'), covar=tensor([0.4027, 0.3718, 0.2978, 0.3383, 0.4372, 0.3571, 0.3950, 0.2812], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0243, 0.0262, 0.0287, 0.0285, 0.0262, 0.0293, 0.0246], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:02:30,390 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3942, 2.2703, 1.9494, 0.8834, 2.0447, 1.8657, 1.7604, 2.1049], device='cuda:6'), covar=tensor([0.0838, 0.0755, 0.1600, 0.2078, 0.1355, 0.2130, 0.2201, 0.0842], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0182, 0.0210, 0.0208, 0.0224, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:02:31,488 INFO [finetune.py:976] (6/7) Epoch 23, batch 3000, loss[loss=0.2143, simple_loss=0.2888, pruned_loss=0.06987, over 4926.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2505, pruned_loss=0.05279, over 955310.89 frames. ], batch size: 33, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:02:31,488 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 04:02:42,304 INFO [finetune.py:1010] (6/7) Epoch 23, validation: loss=0.1567, simple_loss=0.225, pruned_loss=0.04424, over 2265189.00 frames. 2023-03-27 04:02:42,305 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 04:02:51,933 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129024.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:03:14,563 INFO [finetune.py:976] (6/7) Epoch 23, batch 3050, loss[loss=0.1536, simple_loss=0.2246, pruned_loss=0.04128, over 4920.00 frames. ], tot_loss[loss=0.1779, simple_loss=0.2503, pruned_loss=0.05272, over 956053.10 frames. ], batch size: 33, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:03:16,833 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.606e+02 1.899e+02 2.236e+02 5.313e+02, threshold=3.798e+02, percent-clipped=3.0 2023-03-27 04:03:22,095 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=129069.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:03:23,919 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7329, 1.2575, 0.7569, 1.5577, 2.1177, 1.0555, 1.5128, 1.4838], device='cuda:6'), covar=tensor([0.1421, 0.1968, 0.1941, 0.1127, 0.1783, 0.1885, 0.1371, 0.1928], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0111, 0.0092, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 04:03:47,838 INFO [finetune.py:976] (6/7) Epoch 23, batch 3100, loss[loss=0.156, simple_loss=0.2362, pruned_loss=0.03787, over 4830.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2479, pruned_loss=0.05138, over 957793.80 frames. ], batch size: 33, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:03:51,814 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. 
limit=2.0 2023-03-27 04:03:53,282 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129117.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:04:06,406 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129137.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:04:20,603 INFO [finetune.py:976] (6/7) Epoch 23, batch 3150, loss[loss=0.1529, simple_loss=0.226, pruned_loss=0.0399, over 4834.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2458, pruned_loss=0.05171, over 955408.44 frames. ], batch size: 30, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:04:22,456 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.200e+02 1.503e+02 1.751e+02 2.296e+02 3.694e+02, threshold=3.502e+02, percent-clipped=0.0 2023-03-27 04:04:41,141 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 04:05:01,880 INFO [finetune.py:976] (6/7) Epoch 23, batch 3200, loss[loss=0.1543, simple_loss=0.2242, pruned_loss=0.04222, over 4908.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2425, pruned_loss=0.05041, over 956880.45 frames. ], batch size: 43, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:05:26,800 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129246.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:05:37,438 INFO [finetune.py:976] (6/7) Epoch 23, batch 3250, loss[loss=0.1898, simple_loss=0.2622, pruned_loss=0.05869, over 4816.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2434, pruned_loss=0.05127, over 957738.44 frames. ], batch size: 33, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:05:39,769 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.902e+01 1.487e+02 1.735e+02 2.015e+02 4.622e+02, threshold=3.470e+02, percent-clipped=1.0 2023-03-27 04:06:15,221 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7672, 1.3057, 1.3056, 0.7820, 1.4531, 1.6543, 1.4826, 1.3338], device='cuda:6'), covar=tensor([0.0877, 0.0878, 0.0521, 0.0600, 0.0619, 0.0558, 0.0506, 0.0760], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0127, 0.0123, 0.0132, 0.0130, 0.0141, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.9779e-05, 1.0799e-04, 9.0828e-05, 8.6773e-05, 9.2428e-05, 9.2478e-05, 1.0091e-04, 1.0667e-04], device='cuda:6') 2023-03-27 04:06:22,329 INFO [finetune.py:976] (6/7) Epoch 23, batch 3300, loss[loss=0.1651, simple_loss=0.2515, pruned_loss=0.03932, over 4766.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2464, pruned_loss=0.05176, over 956558.28 frames. ], batch size: 28, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:06:32,290 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.51 vs. limit=2.0 2023-03-27 04:06:56,063 INFO [finetune.py:976] (6/7) Epoch 23, batch 3350, loss[loss=0.207, simple_loss=0.2935, pruned_loss=0.06025, over 4900.00 frames. ], tot_loss[loss=0.1762, simple_loss=0.2484, pruned_loss=0.05205, over 957589.53 frames. ], batch size: 43, lr: 3.09e-03, grad_scale: 32.0 2023-03-27 04:06:57,833 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.264e+01 1.616e+02 1.807e+02 2.144e+02 4.365e+02, threshold=3.613e+02, percent-clipped=1.0 2023-03-27 04:07:10,600 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-27 04:07:47,795 INFO [finetune.py:976] (6/7) Epoch 23, batch 3400, loss[loss=0.1332, simple_loss=0.2132, pruned_loss=0.02662, over 4737.00 frames. 
], tot_loss[loss=0.177, simple_loss=0.2493, pruned_loss=0.05236, over 957425.72 frames. ], batch size: 27, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:07:50,377 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=129413.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:08:06,763 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6274, 1.1667, 0.7487, 1.4222, 2.0170, 0.7197, 1.3695, 1.3730], device='cuda:6'), covar=tensor([0.1520, 0.2074, 0.1710, 0.1158, 0.1788, 0.1939, 0.1483, 0.2022], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0094, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 04:08:07,363 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=129437.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:08:21,052 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 04:08:21,220 INFO [finetune.py:976] (6/7) Epoch 23, batch 3450, loss[loss=0.1095, simple_loss=0.1772, pruned_loss=0.02084, over 4737.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2492, pruned_loss=0.05218, over 955580.23 frames. ], batch size: 23, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:08:23,467 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.549e+02 1.981e+02 2.372e+02 4.494e+02, threshold=3.962e+02, percent-clipped=6.0 2023-03-27 04:08:31,794 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=129474.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:08:39,421 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129485.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:08:48,665 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-27 04:08:54,913 INFO [finetune.py:976] (6/7) Epoch 23, batch 3500, loss[loss=0.1854, simple_loss=0.2603, pruned_loss=0.0552, over 4765.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.247, pruned_loss=0.05143, over 955589.52 frames. ], batch size: 26, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:09:20,966 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=129546.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:09:28,786 INFO [finetune.py:976] (6/7) Epoch 23, batch 3550, loss[loss=0.1421, simple_loss=0.2053, pruned_loss=0.03944, over 4923.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.244, pruned_loss=0.05031, over 955811.97 frames. 
], batch size: 33, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:09:30,571 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.427e+01 1.468e+02 1.701e+02 2.000e+02 4.470e+02, threshold=3.402e+02, percent-clipped=1.0 2023-03-27 04:09:31,906 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7285, 1.5980, 2.0532, 1.3023, 1.7541, 1.9636, 1.5167, 2.1722], device='cuda:6'), covar=tensor([0.1261, 0.2125, 0.1256, 0.1672, 0.0935, 0.1307, 0.2884, 0.0840], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0206, 0.0192, 0.0189, 0.0173, 0.0214, 0.0216, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:09:52,441 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=129594.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:10:11,048 INFO [finetune.py:976] (6/7) Epoch 23, batch 3600, loss[loss=0.1662, simple_loss=0.2316, pruned_loss=0.05037, over 4755.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2415, pruned_loss=0.04981, over 955992.35 frames. ], batch size: 28, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:10:12,954 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=129612.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:10:18,887 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6932, 0.7219, 1.7439, 1.6658, 1.5619, 1.4944, 1.6166, 1.6646], device='cuda:6'), covar=tensor([0.3534, 0.3869, 0.3233, 0.3134, 0.4353, 0.3425, 0.3704, 0.2785], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0244, 0.0264, 0.0288, 0.0286, 0.0262, 0.0295, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:10:44,782 INFO [finetune.py:976] (6/7) Epoch 23, batch 3650, loss[loss=0.2165, simple_loss=0.2857, pruned_loss=0.07365, over 4827.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2429, pruned_loss=0.05038, over 954874.48 frames. ], batch size: 40, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:10:46,577 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.535e+02 1.802e+02 2.202e+02 3.404e+02, threshold=3.605e+02, percent-clipped=1.0 2023-03-27 04:10:53,483 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=129673.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:10:58,216 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7807, 1.7505, 1.4825, 1.9083, 2.2633, 1.8463, 1.5038, 1.4627], device='cuda:6'), covar=tensor([0.2345, 0.2134, 0.2039, 0.1632, 0.1468, 0.1219, 0.2278, 0.2001], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0209, 0.0212, 0.0196, 0.0243, 0.0189, 0.0216, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:11:08,826 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0801, 1.7947, 1.7280, 1.8339, 1.7556, 1.7887, 1.8185, 2.5138], device='cuda:6'), covar=tensor([0.3016, 0.3813, 0.2858, 0.3423, 0.3663, 0.2119, 0.3516, 0.1435], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0234, 0.0275, 0.0255, 0.0226, 0.0252, 0.0236], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:11:23,355 INFO [finetune.py:976] (6/7) Epoch 23, batch 3700, loss[loss=0.1729, simple_loss=0.2445, pruned_loss=0.05063, over 4864.00 frames. 
], tot_loss[loss=0.1739, simple_loss=0.2455, pruned_loss=0.05111, over 952763.35 frames. ], batch size: 44, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:11:27,933 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5017, 1.4425, 1.8481, 2.9635, 1.9526, 2.1840, 0.9829, 2.4597], device='cuda:6'), covar=tensor([0.1650, 0.1389, 0.1166, 0.0578, 0.0818, 0.1277, 0.1689, 0.0504], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0163, 0.0100, 0.0136, 0.0125, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 04:11:55,696 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5380, 1.3948, 1.2568, 1.5160, 1.6445, 1.5352, 1.0177, 1.3043], device='cuda:6'), covar=tensor([0.2025, 0.1947, 0.1863, 0.1509, 0.1515, 0.1156, 0.2477, 0.1750], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0210, 0.0213, 0.0197, 0.0244, 0.0190, 0.0217, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:12:00,259 INFO [finetune.py:976] (6/7) Epoch 23, batch 3750, loss[loss=0.1518, simple_loss=0.2363, pruned_loss=0.03367, over 4816.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2484, pruned_loss=0.05233, over 952784.45 frames. ], batch size: 40, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:12:02,070 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.237e+01 1.564e+02 1.874e+02 2.284e+02 3.839e+02, threshold=3.748e+02, percent-clipped=2.0 2023-03-27 04:12:06,378 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129769.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:12:06,434 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5064, 1.4469, 1.2947, 1.5201, 1.7876, 1.6805, 1.4522, 1.2708], device='cuda:6'), covar=tensor([0.0350, 0.0300, 0.0590, 0.0283, 0.0219, 0.0377, 0.0354, 0.0426], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0107, 0.0145, 0.0111, 0.0099, 0.0111, 0.0102, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7477e-05, 8.1799e-05, 1.1324e-04, 8.5153e-05, 7.7341e-05, 8.2336e-05, 7.5803e-05, 8.5047e-05], device='cuda:6') 2023-03-27 04:12:20,812 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=129791.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:12:35,161 INFO [finetune.py:976] (6/7) Epoch 23, batch 3800, loss[loss=0.1411, simple_loss=0.2183, pruned_loss=0.03192, over 4803.00 frames. ], tot_loss[loss=0.1758, simple_loss=0.2482, pruned_loss=0.05175, over 952430.97 frames. ], batch size: 25, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:13:16,863 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=129852.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:13:21,831 INFO [finetune.py:976] (6/7) Epoch 23, batch 3850, loss[loss=0.1667, simple_loss=0.2294, pruned_loss=0.05199, over 4831.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2466, pruned_loss=0.0512, over 954022.43 frames. 
], batch size: 25, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:13:24,150 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.288e+01 1.597e+02 1.881e+02 2.160e+02 3.613e+02, threshold=3.763e+02, percent-clipped=0.0 2023-03-27 04:13:31,028 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2811, 1.9233, 1.8716, 0.9085, 2.1501, 2.3614, 2.1162, 1.8264], device='cuda:6'), covar=tensor([0.0942, 0.0762, 0.0498, 0.0741, 0.0568, 0.0769, 0.0430, 0.0795], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0127, 0.0122, 0.0131, 0.0129, 0.0141, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.9363e-05, 1.0727e-04, 9.0818e-05, 8.6206e-05, 9.2306e-05, 9.2062e-05, 1.0071e-04, 1.0622e-04], device='cuda:6') 2023-03-27 04:13:55,058 INFO [finetune.py:976] (6/7) Epoch 23, batch 3900, loss[loss=0.1643, simple_loss=0.2378, pruned_loss=0.04537, over 4271.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2449, pruned_loss=0.05134, over 951923.65 frames. ], batch size: 65, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:14:27,720 INFO [finetune.py:976] (6/7) Epoch 23, batch 3950, loss[loss=0.1514, simple_loss=0.2205, pruned_loss=0.04121, over 4898.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2412, pruned_loss=0.05029, over 949135.00 frames. ], batch size: 32, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:14:29,949 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.481e+02 1.820e+02 2.101e+02 4.779e+02, threshold=3.640e+02, percent-clipped=1.0 2023-03-27 04:14:34,549 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=129968.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:14:56,371 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130000.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:14:58,080 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130002.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:15:02,812 INFO [finetune.py:976] (6/7) Epoch 23, batch 4000, loss[loss=0.1515, simple_loss=0.2216, pruned_loss=0.04067, over 4903.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2402, pruned_loss=0.05005, over 951425.53 frames. ], batch size: 35, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:15:45,415 INFO [finetune.py:976] (6/7) Epoch 23, batch 4050, loss[loss=0.2095, simple_loss=0.2906, pruned_loss=0.06419, over 4835.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2446, pruned_loss=0.05158, over 953151.37 frames. ], batch size: 47, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:15:47,194 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130061.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:15:47,689 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.665e+02 1.960e+02 2.479e+02 5.275e+02, threshold=3.921e+02, percent-clipped=4.0 2023-03-27 04:15:48,460 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130063.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:15:53,612 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130069.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:16:19,204 INFO [finetune.py:976] (6/7) Epoch 23, batch 4100, loss[loss=0.25, simple_loss=0.311, pruned_loss=0.09454, over 4219.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2466, pruned_loss=0.05178, over 951320.92 frames. 
], batch size: 65, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:16:26,590 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130117.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:16:35,156 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6035, 1.6367, 1.3146, 1.6124, 1.9176, 1.8609, 1.5732, 1.3575], device='cuda:6'), covar=tensor([0.0371, 0.0298, 0.0594, 0.0279, 0.0224, 0.0379, 0.0325, 0.0393], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0144, 0.0111, 0.0099, 0.0111, 0.0101, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7233e-05, 8.1645e-05, 1.1273e-04, 8.5094e-05, 7.7158e-05, 8.2311e-05, 7.5535e-05, 8.4928e-05], device='cuda:6') 2023-03-27 04:16:47,830 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9489, 2.5790, 2.4665, 1.3905, 2.5240, 1.9846, 1.8928, 2.2909], device='cuda:6'), covar=tensor([0.1010, 0.0943, 0.1850, 0.2136, 0.1747, 0.2471, 0.2490, 0.1302], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0200, 0.0182, 0.0211, 0.0209, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:16:54,990 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130147.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:17:02,633 INFO [finetune.py:976] (6/7) Epoch 23, batch 4150, loss[loss=0.1894, simple_loss=0.2647, pruned_loss=0.05703, over 4759.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2474, pruned_loss=0.05174, over 953657.65 frames. ], batch size: 26, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:17:04,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.505e+02 1.863e+02 2.291e+02 4.324e+02, threshold=3.726e+02, percent-clipped=3.0 2023-03-27 04:17:32,063 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7170, 1.5985, 1.5873, 1.6569, 1.3919, 3.8253, 1.5429, 1.9900], device='cuda:6'), covar=tensor([0.3360, 0.2648, 0.2263, 0.2439, 0.1680, 0.0172, 0.2465, 0.1282], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 04:17:34,957 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8212, 1.7043, 1.5017, 1.5240, 1.8729, 1.5837, 1.8225, 1.8157], device='cuda:6'), covar=tensor([0.1325, 0.1855, 0.2863, 0.2348, 0.2455, 0.1682, 0.2657, 0.1692], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0255, 0.0250, 0.0206, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:17:36,690 INFO [finetune.py:976] (6/7) Epoch 23, batch 4200, loss[loss=0.1591, simple_loss=0.2344, pruned_loss=0.04194, over 4901.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2469, pruned_loss=0.0511, over 954487.35 frames. 
], batch size: 46, lr: 3.09e-03, grad_scale: 64.0 2023-03-27 04:18:15,285 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9457, 1.8336, 1.5531, 1.4970, 1.9613, 1.6983, 1.8478, 1.9464], device='cuda:6'), covar=tensor([0.1406, 0.1939, 0.3114, 0.2421, 0.2523, 0.1784, 0.2727, 0.1753], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0188, 0.0235, 0.0253, 0.0248, 0.0205, 0.0213, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:18:24,042 INFO [finetune.py:976] (6/7) Epoch 23, batch 4250, loss[loss=0.1572, simple_loss=0.2268, pruned_loss=0.04374, over 4821.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2455, pruned_loss=0.05128, over 954070.04 frames. ], batch size: 30, lr: 3.08e-03, grad_scale: 64.0 2023-03-27 04:18:25,854 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.516e+02 1.759e+02 2.094e+02 3.793e+02, threshold=3.518e+02, percent-clipped=1.0 2023-03-27 04:18:27,894 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 04:18:30,082 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130268.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:18:40,557 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3074, 3.7816, 3.9611, 4.0852, 4.0519, 3.8565, 4.3901, 1.4187], device='cuda:6'), covar=tensor([0.0673, 0.0798, 0.0834, 0.1011, 0.1062, 0.1442, 0.0650, 0.5613], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0245, 0.0278, 0.0292, 0.0337, 0.0286, 0.0304, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:18:44,131 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9423, 1.9147, 1.6689, 2.0661, 2.3364, 2.0769, 1.6043, 1.6160], device='cuda:6'), covar=tensor([0.2061, 0.1724, 0.1830, 0.1542, 0.1514, 0.1162, 0.2317, 0.1912], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0213, 0.0196, 0.0245, 0.0190, 0.0216, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:18:55,818 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1405, 1.7500, 2.1711, 2.0850, 1.8669, 1.8372, 2.0747, 2.0499], device='cuda:6'), covar=tensor([0.3716, 0.4200, 0.3218, 0.4140, 0.5050, 0.4053, 0.4794, 0.2979], device='cuda:6'), in_proj_covar=tensor([0.0256, 0.0241, 0.0261, 0.0284, 0.0284, 0.0260, 0.0291, 0.0245], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:18:57,490 INFO [finetune.py:976] (6/7) Epoch 23, batch 4300, loss[loss=0.1478, simple_loss=0.2224, pruned_loss=0.03657, over 4763.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2427, pruned_loss=0.05031, over 955844.22 frames. 
], batch size: 28, lr: 3.08e-03, grad_scale: 64.0 2023-03-27 04:19:02,813 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130316.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:19:29,235 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130356.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:19:30,474 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130358.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:19:31,031 INFO [finetune.py:976] (6/7) Epoch 23, batch 4350, loss[loss=0.1745, simple_loss=0.2415, pruned_loss=0.05371, over 4693.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2409, pruned_loss=0.04987, over 954967.52 frames. ], batch size: 23, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:19:33,424 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.417e+02 1.746e+02 2.112e+02 4.412e+02, threshold=3.492e+02, percent-clipped=1.0 2023-03-27 04:19:48,220 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130384.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:20:04,342 INFO [finetune.py:976] (6/7) Epoch 23, batch 4400, loss[loss=0.1737, simple_loss=0.2443, pruned_loss=0.05156, over 4926.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2426, pruned_loss=0.05071, over 956194.09 frames. ], batch size: 38, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:20:19,491 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130432.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:20:24,183 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130438.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:20:30,573 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130445.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:20:31,761 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130447.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:20:46,861 INFO [finetune.py:976] (6/7) Epoch 23, batch 4450, loss[loss=0.1944, simple_loss=0.2731, pruned_loss=0.0578, over 4912.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2456, pruned_loss=0.05139, over 953113.18 frames. ], batch size: 37, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:20:49,238 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.491e+02 1.813e+02 2.246e+02 3.707e+02, threshold=3.626e+02, percent-clipped=3.0 2023-03-27 04:21:10,560 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130493.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:21:11,731 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130495.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:21:14,694 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130499.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:21:20,677 INFO [finetune.py:976] (6/7) Epoch 23, batch 4500, loss[loss=0.1861, simple_loss=0.2524, pruned_loss=0.05989, over 4195.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2465, pruned_loss=0.05104, over 954028.47 frames. 
], batch size: 65, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:21:53,318 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130548.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:22:04,073 INFO [finetune.py:976] (6/7) Epoch 23, batch 4550, loss[loss=0.1445, simple_loss=0.2168, pruned_loss=0.03613, over 4803.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2488, pruned_loss=0.05234, over 955234.75 frames. ], batch size: 25, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:22:06,502 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.768e+01 1.546e+02 1.777e+02 2.233e+02 3.779e+02, threshold=3.553e+02, percent-clipped=2.0 2023-03-27 04:22:07,813 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=130565.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:22:25,164 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4209, 2.2939, 1.8004, 2.3298, 2.3310, 2.0513, 2.6893, 2.3585], device='cuda:6'), covar=tensor([0.1448, 0.2169, 0.3249, 0.2620, 0.2570, 0.1833, 0.3044, 0.1917], device='cuda:6'), in_proj_covar=tensor([0.0187, 0.0188, 0.0234, 0.0252, 0.0247, 0.0204, 0.0212, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:22:37,457 INFO [finetune.py:976] (6/7) Epoch 23, batch 4600, loss[loss=0.1517, simple_loss=0.2235, pruned_loss=0.03996, over 4782.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2479, pruned_loss=0.05156, over 957094.12 frames. ], batch size: 29, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:22:37,573 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130609.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 04:22:47,769 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=130626.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:23:10,854 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130656.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:23:17,078 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=130658.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:23:17,586 INFO [finetune.py:976] (6/7) Epoch 23, batch 4650, loss[loss=0.1983, simple_loss=0.2645, pruned_loss=0.06606, over 4800.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.245, pruned_loss=0.05066, over 955444.06 frames. 
], batch size: 51, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:23:19,986 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.906e+01 1.515e+02 1.768e+02 2.232e+02 6.495e+02, threshold=3.536e+02, percent-clipped=3.0 2023-03-27 04:23:26,354 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8389, 1.2731, 1.7449, 1.7095, 1.5645, 1.5604, 1.7084, 1.7766], device='cuda:6'), covar=tensor([0.4974, 0.4229, 0.3774, 0.4216, 0.5251, 0.4415, 0.4875, 0.3550], device='cuda:6'), in_proj_covar=tensor([0.0258, 0.0242, 0.0263, 0.0287, 0.0286, 0.0262, 0.0293, 0.0247], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:23:54,689 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130704.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:23:55,883 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=130706.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:23:58,153 INFO [finetune.py:976] (6/7) Epoch 23, batch 4700, loss[loss=0.1737, simple_loss=0.2351, pruned_loss=0.05618, over 4833.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2425, pruned_loss=0.05019, over 956944.04 frames. ], batch size: 30, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:23:59,222 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-27 04:24:18,058 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130740.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:24:31,364 INFO [finetune.py:976] (6/7) Epoch 23, batch 4750, loss[loss=0.2218, simple_loss=0.2864, pruned_loss=0.07864, over 4081.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2403, pruned_loss=0.04962, over 955853.83 frames. ], batch size: 65, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:24:34,232 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.253e+02 1.528e+02 1.803e+02 2.150e+02 3.686e+02, threshold=3.606e+02, percent-clipped=2.0 2023-03-27 04:24:49,883 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130788.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:24:54,007 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130794.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:25:04,659 INFO [finetune.py:976] (6/7) Epoch 23, batch 4800, loss[loss=0.1804, simple_loss=0.2532, pruned_loss=0.05384, over 4785.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2436, pruned_loss=0.05104, over 956001.96 frames. ], batch size: 29, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:25:29,540 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-27 04:25:31,603 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6900, 1.2374, 1.0193, 1.5689, 1.9944, 1.5377, 1.3993, 1.6117], device='cuda:6'), covar=tensor([0.1417, 0.2049, 0.1775, 0.1152, 0.2087, 0.2092, 0.1435, 0.1823], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0095, 0.0111, 0.0093, 0.0120, 0.0094, 0.0100, 0.0090], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 04:25:37,287 INFO [finetune.py:976] (6/7) Epoch 23, batch 4850, loss[loss=0.1124, simple_loss=0.1915, pruned_loss=0.01662, over 4748.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.247, pruned_loss=0.05192, over 955309.90 frames. 
], batch size: 26, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:25:40,092 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.613e+02 1.947e+02 2.336e+02 6.046e+02, threshold=3.894e+02, percent-clipped=4.0 2023-03-27 04:26:15,656 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130904.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:26:16,948 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8339, 1.4131, 1.9267, 1.8232, 1.6285, 1.6052, 1.7675, 1.7871], device='cuda:6'), covar=tensor([0.3735, 0.3689, 0.2924, 0.3388, 0.4412, 0.3745, 0.3784, 0.2787], device='cuda:6'), in_proj_covar=tensor([0.0258, 0.0242, 0.0263, 0.0286, 0.0285, 0.0262, 0.0293, 0.0247], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:26:19,122 INFO [finetune.py:976] (6/7) Epoch 23, batch 4900, loss[loss=0.1372, simple_loss=0.2021, pruned_loss=0.03612, over 4246.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2479, pruned_loss=0.05209, over 953770.28 frames. ], batch size: 18, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:26:28,433 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=130921.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:26:52,304 INFO [finetune.py:976] (6/7) Epoch 23, batch 4950, loss[loss=0.1719, simple_loss=0.2463, pruned_loss=0.04879, over 4874.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2479, pruned_loss=0.05164, over 953130.01 frames. ], batch size: 34, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:26:57,592 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.290e+01 1.586e+02 1.789e+02 2.374e+02 3.586e+02, threshold=3.578e+02, percent-clipped=0.0 2023-03-27 04:27:27,180 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.68 vs. limit=5.0 2023-03-27 04:27:36,349 INFO [finetune.py:976] (6/7) Epoch 23, batch 5000, loss[loss=0.158, simple_loss=0.2313, pruned_loss=0.04236, over 4903.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2462, pruned_loss=0.0511, over 952612.70 frames. ], batch size: 46, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:27:42,761 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6289, 2.8082, 2.4590, 1.8069, 2.5281, 2.7047, 2.6740, 2.3227], device='cuda:6'), covar=tensor([0.0605, 0.0505, 0.0701, 0.0806, 0.0815, 0.0727, 0.0574, 0.0957], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0135, 0.0138, 0.0118, 0.0125, 0.0137, 0.0137, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:27:45,075 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7047, 1.5428, 1.4165, 1.7593, 1.6176, 1.7447, 0.9844, 1.4114], device='cuda:6'), covar=tensor([0.2097, 0.2044, 0.1943, 0.1607, 0.1531, 0.1150, 0.2439, 0.1873], device='cuda:6'), in_proj_covar=tensor([0.0243, 0.0209, 0.0212, 0.0195, 0.0242, 0.0189, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:27:57,572 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131040.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:09,935 INFO [finetune.py:976] (6/7) Epoch 23, batch 5050, loss[loss=0.1461, simple_loss=0.2105, pruned_loss=0.0409, over 4872.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2439, pruned_loss=0.05089, over 953723.66 frames. 
], batch size: 31, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:28:12,372 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.381e+02 1.770e+02 2.059e+02 4.416e+02, threshold=3.539e+02, percent-clipped=4.0 2023-03-27 04:28:35,754 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2269, 2.1392, 2.2292, 1.5183, 2.1549, 2.2539, 2.2121, 1.8190], device='cuda:6'), covar=tensor([0.0641, 0.0701, 0.0676, 0.0885, 0.0719, 0.0792, 0.0618, 0.1122], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0134, 0.0138, 0.0118, 0.0125, 0.0137, 0.0136, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:28:41,482 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:41,541 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:45,161 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131094.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:28:57,921 INFO [finetune.py:976] (6/7) Epoch 23, batch 5100, loss[loss=0.1364, simple_loss=0.2126, pruned_loss=0.03008, over 4756.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2409, pruned_loss=0.05019, over 954321.07 frames. ], batch size: 26, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:29:17,262 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131136.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:29:20,885 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131142.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:29:31,083 INFO [finetune.py:976] (6/7) Epoch 23, batch 5150, loss[loss=0.1565, simple_loss=0.2385, pruned_loss=0.03726, over 4791.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2405, pruned_loss=0.04983, over 954727.00 frames. 
], batch size: 29, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:29:34,468 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.214e+01 1.572e+02 1.903e+02 2.241e+02 4.010e+02, threshold=3.805e+02, percent-clipped=1.0 2023-03-27 04:29:34,578 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6236, 1.0760, 0.8815, 1.5097, 2.0508, 1.4544, 1.4098, 1.5654], device='cuda:6'), covar=tensor([0.1386, 0.2125, 0.1914, 0.1192, 0.1850, 0.2014, 0.1486, 0.1802], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0093, 0.0119, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 04:29:41,808 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131174.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:29:48,869 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2981, 1.3347, 1.5092, 1.4902, 1.4843, 2.9030, 1.2836, 1.4189], device='cuda:6'), covar=tensor([0.0963, 0.1770, 0.1207, 0.0965, 0.1595, 0.0313, 0.1455, 0.1751], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 04:30:01,338 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131204.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 04:30:01,382 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8949, 1.4181, 1.9870, 1.9451, 1.7104, 1.6735, 1.8700, 1.8127], device='cuda:6'), covar=tensor([0.3818, 0.3967, 0.3044, 0.3379, 0.4664, 0.3592, 0.4032, 0.2950], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0244, 0.0264, 0.0288, 0.0287, 0.0263, 0.0295, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:30:04,270 INFO [finetune.py:976] (6/7) Epoch 23, batch 5200, loss[loss=0.1829, simple_loss=0.2526, pruned_loss=0.05657, over 4860.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.245, pruned_loss=0.05094, over 955416.15 frames. ], batch size: 31, lr: 3.08e-03, grad_scale: 32.0 2023-03-27 04:30:12,617 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131221.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:30:22,988 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131235.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 04:30:28,711 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0 2023-03-27 04:30:33,247 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131252.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:30:37,394 INFO [finetune.py:976] (6/7) Epoch 23, batch 5250, loss[loss=0.1267, simple_loss=0.2124, pruned_loss=0.02047, over 4764.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2466, pruned_loss=0.05129, over 954408.78 frames. 
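
The zipformer.py:1188 lines are layer-dropout bookkeeping: per encoder stack, whether any layers are skipped this batch (num_to_drop, layers_to_drop) alongside that stack's warmup window. Drops still occur long after batch_count has passed warmup_end (batch_count is above 131,000 here against windows ending at 4000), so the drop probability evidently keeps a nonzero floor. A sketch under that assumption; both the ramp and the 0.075 floor are illustrative constants, not values read from the log.

    import random

    def sample_layers_to_drop(batch_count, warmup_begin, warmup_end,
                              num_layers, rng=random):
        # Assumed schedule: elevated drop probability inside the warmup
        # window, decaying to a small constant floor afterwards.
        if batch_count < warmup_end:
            frac = (warmup_end - batch_count) / (warmup_end - warmup_begin)
            p = max(0.075, 0.5 * frac)
        else:
            p = 0.075  # assumed post-warmup floor
        return {i for i in range(num_layers) if rng.random() < p}
        # usually set(), occasionally {0}, {1}, {3}, ... as in the records above
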
], batch size: 28, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:30:40,884 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.531e+02 1.792e+02 2.239e+02 3.281e+02, threshold=3.585e+02, percent-clipped=0.0 2023-03-27 04:30:44,381 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131269.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:31:03,747 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 04:31:21,561 INFO [finetune.py:976] (6/7) Epoch 23, batch 5300, loss[loss=0.2587, simple_loss=0.3287, pruned_loss=0.09432, over 4197.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2478, pruned_loss=0.05184, over 954263.64 frames. ], batch size: 65, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:31:39,010 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-27 04:31:49,022 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2335, 2.0388, 1.7929, 1.9838, 1.9186, 2.0299, 1.9681, 2.7528], device='cuda:6'), covar=tensor([0.3767, 0.4408, 0.3193, 0.4063, 0.4182, 0.2379, 0.3910, 0.1705], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0260, 0.0231, 0.0273, 0.0253, 0.0224, 0.0251, 0.0233], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:31:54,360 INFO [finetune.py:976] (6/7) Epoch 23, batch 5350, loss[loss=0.173, simple_loss=0.2439, pruned_loss=0.05103, over 4803.00 frames. ], tot_loss[loss=0.1764, simple_loss=0.2484, pruned_loss=0.0522, over 954124.55 frames. ], batch size: 40, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:31:57,386 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.087e+02 1.529e+02 1.830e+02 2.196e+02 3.219e+02, threshold=3.659e+02, percent-clipped=0.0 2023-03-27 04:32:38,106 INFO [finetune.py:976] (6/7) Epoch 23, batch 5400, loss[loss=0.1311, simple_loss=0.2024, pruned_loss=0.02987, over 4824.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.246, pruned_loss=0.05152, over 953829.79 frames. 
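
The grad_scale field in these records is the fp16 loss-scaling factor, and its halving from 32.0 at batch 5200 to 16.0 at batch 5250 above is the standard reaction to an overflowing step. In generic PyTorch AMP terms it behaves as sketched below; this is a generic equivalent, and the actual optimizer wiring in finetune.py may differ.

    import torch
    from torch.cuda.amp import GradScaler, autocast

    def train_step(model, batch, optimizer, scaler):
        optimizer.zero_grad()
        with autocast():
            loss = model(batch)       # assumed: model returns a scalar loss
        scaler.scale(loss).backward()
        scaler.step(optimizer)        # skipped internally if grads hit inf/nan
        scaler.update()               # halves the scale after an overflow,
                                      # grows it again after a stable stretch
        return scaler.get_scale()     # the value logged as grad_scale (assumption)

    # scaler = GradScaler(init_scale=32.0)  # illustrative initial scale
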
], batch size: 30, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:32:38,217 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131409.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:32:38,789 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6174, 1.4055, 2.2289, 3.3614, 2.1090, 2.3818, 1.3795, 2.7804], device='cuda:6'), covar=tensor([0.1710, 0.1498, 0.1125, 0.0506, 0.0849, 0.1359, 0.1467, 0.0451], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0136, 0.0123, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 04:32:42,430 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131416.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:32:50,102 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2434, 3.6768, 3.8741, 4.0463, 4.0235, 3.7753, 4.3141, 1.3812], device='cuda:6'), covar=tensor([0.0785, 0.0882, 0.0843, 0.0989, 0.1136, 0.1537, 0.0682, 0.5696], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0247, 0.0280, 0.0294, 0.0339, 0.0287, 0.0304, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:33:11,754 INFO [finetune.py:976] (6/7) Epoch 23, batch 5450, loss[loss=0.141, simple_loss=0.2135, pruned_loss=0.03423, over 4774.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2431, pruned_loss=0.0508, over 953101.64 frames. ], batch size: 26, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:33:14,789 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.514e+02 1.875e+02 2.409e+02 5.439e+02, threshold=3.749e+02, percent-clipped=4.0 2023-03-27 04:33:18,556 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131470.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:33:20,641 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.32 vs. limit=2.0 2023-03-27 04:33:23,304 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131477.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:33:33,328 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131492.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:33:40,299 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.92 vs. limit=2.0 2023-03-27 04:33:49,606 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8910, 1.7876, 1.6476, 1.9177, 2.1830, 1.9854, 1.4296, 1.5985], device='cuda:6'), covar=tensor([0.2095, 0.1838, 0.1827, 0.1587, 0.1492, 0.1094, 0.2352, 0.1931], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0213, 0.0196, 0.0243, 0.0190, 0.0216, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:33:51,832 INFO [finetune.py:976] (6/7) Epoch 23, batch 5500, loss[loss=0.1082, simple_loss=0.1836, pruned_loss=0.01639, over 4760.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2402, pruned_loss=0.04982, over 951876.22 frames. 
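
The attn_weights_entropy diagnostics above are eight-element tensors, plausibly one value per attention head: low entries mean sharply peaked attention, high entries mean diffuse attention. A sketch of how such a per-head entropy could be computed, assuming a (num_heads, query_len, key_len) weight tensor whose rows sum to 1; the exact reduction in zipformer.py is not shown in this log.

    import torch

    def attn_weights_entropy(attn_weights, eps=1e-20):
        # attn_weights: (num_heads, query_len, key_len), rows summing to 1.
        ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return ent.mean(dim=-1)   # (num_heads,): one diagnostic value per head
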
], batch size: 26, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:34:13,189 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131530.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 04:34:29,165 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131553.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:34:33,175 INFO [finetune.py:976] (6/7) Epoch 23, batch 5550, loss[loss=0.1645, simple_loss=0.2503, pruned_loss=0.03939, over 4767.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2427, pruned_loss=0.05091, over 951968.27 frames. ], batch size: 54, lr: 3.08e-03, grad_scale: 16.0 2023-03-27 04:34:36,711 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.823e+01 1.422e+02 1.728e+02 2.186e+02 5.215e+02, threshold=3.457e+02, percent-clipped=2.0 2023-03-27 04:34:42,305 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0540, 1.9229, 1.6631, 1.9208, 1.7949, 1.8074, 1.8601, 2.5567], device='cuda:6'), covar=tensor([0.3686, 0.3730, 0.3047, 0.3215, 0.3615, 0.2466, 0.3573, 0.1605], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0275, 0.0255, 0.0226, 0.0253, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:34:57,060 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7264, 1.6255, 1.5424, 0.9507, 1.7684, 1.9219, 1.9054, 1.4553], device='cuda:6'), covar=tensor([0.1087, 0.0745, 0.0574, 0.0572, 0.0443, 0.0705, 0.0362, 0.0844], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0125, 0.0122, 0.0130, 0.0129, 0.0140, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.8263e-05, 1.0616e-04, 8.9239e-05, 8.5624e-05, 9.1150e-05, 9.1547e-05, 9.9597e-05, 1.0528e-04], device='cuda:6') 2023-03-27 04:35:04,717 INFO [finetune.py:976] (6/7) Epoch 23, batch 5600, loss[loss=0.1733, simple_loss=0.2424, pruned_loss=0.05212, over 4921.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2459, pruned_loss=0.05181, over 952986.96 frames. ], batch size: 38, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:35:06,542 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6880, 1.4521, 2.1134, 1.3667, 1.8361, 1.8964, 1.2759, 2.0491], device='cuda:6'), covar=tensor([0.1413, 0.2426, 0.1251, 0.1908, 0.1088, 0.1587, 0.3219, 0.1127], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0205, 0.0191, 0.0189, 0.0172, 0.0213, 0.0215, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:35:29,541 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-27 04:35:34,624 INFO [finetune.py:976] (6/7) Epoch 23, batch 5650, loss[loss=0.1633, simple_loss=0.2476, pruned_loss=0.03953, over 4857.00 frames. ], tot_loss[loss=0.1767, simple_loss=0.2484, pruned_loss=0.05253, over 951585.27 frames. ], batch size: 44, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:35:37,862 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.955e+01 1.494e+02 1.801e+02 2.339e+02 4.576e+02, threshold=3.601e+02, percent-clipped=4.0 2023-03-27 04:35:38,238 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.57 vs. limit=5.0 2023-03-27 04:36:04,503 INFO [finetune.py:976] (6/7) Epoch 23, batch 5700, loss[loss=0.206, simple_loss=0.2663, pruned_loss=0.07285, over 4284.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2463, pruned_loss=0.05192, over 938165.64 frames. 
], batch size: 18, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:36:16,311 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0670, 1.8319, 2.4021, 3.7019, 2.5169, 2.8095, 1.4522, 3.0605], device='cuda:6'), covar=tensor([0.1581, 0.1400, 0.1282, 0.0571, 0.0810, 0.1139, 0.1709, 0.0470], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0123, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 04:36:40,048 INFO [finetune.py:976] (6/7) Epoch 24, batch 0, loss[loss=0.1549, simple_loss=0.2236, pruned_loss=0.04307, over 4198.00 frames. ], tot_loss[loss=0.1549, simple_loss=0.2236, pruned_loss=0.04307, over 4198.00 frames. ], batch size: 66, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:36:40,048 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 04:36:43,495 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1392, 1.9578, 1.8553, 1.8377, 1.8825, 2.0053, 1.9524, 2.6364], device='cuda:6'), covar=tensor([0.4227, 0.4789, 0.3334, 0.3666, 0.4354, 0.2573, 0.3757, 0.1832], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0275, 0.0256, 0.0226, 0.0253, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:36:49,749 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1354, 1.9339, 1.7875, 1.8011, 1.8810, 1.9165, 1.8961, 2.5875], device='cuda:6'), covar=tensor([0.3926, 0.4523, 0.3285, 0.3944, 0.3983, 0.2622, 0.3888, 0.1835], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0263, 0.0234, 0.0275, 0.0256, 0.0226, 0.0253, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:36:50,762 INFO [finetune.py:1010] (6/7) Epoch 24, validation: loss=0.1594, simple_loss=0.227, pruned_loss=0.04592, over 2265189.00 frames. 2023-03-27 04:36:50,762 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 04:36:54,264 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2498, 1.3861, 1.5699, 1.4432, 1.5803, 2.9337, 1.3724, 1.4788], device='cuda:6'), covar=tensor([0.1005, 0.1842, 0.1108, 0.1040, 0.1632, 0.0293, 0.1487, 0.1853], device='cuda:6'), in_proj_covar=tensor([0.0073, 0.0081, 0.0072, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 04:37:07,462 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.206e+01 1.398e+02 1.674e+02 2.004e+02 3.219e+02, threshold=3.348e+02, percent-clipped=0.0 2023-03-27 04:37:08,163 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131765.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:37:12,925 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131772.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:37:25,289 INFO [finetune.py:976] (6/7) Epoch 24, batch 50, loss[loss=0.1679, simple_loss=0.254, pruned_loss=0.04084, over 4923.00 frames. ], tot_loss[loss=0.1786, simple_loss=0.2509, pruned_loss=0.05316, over 217759.79 frames. 
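
The Epoch 24, batch 0 record above interleaves a validation pass with training: tot_loss at batch 0 is just the first batch (the running average restarts each epoch), then finetune.py:1001/1010 report a dev-set loss averaged over 2,265,189 frames plus the peak GPU memory (6481MB). A sketch of such a validation step, where compute_loss is a hypothetical helper standing in for whatever produces the per-batch loss and frame count:

    import torch

    def run_validation(model, valid_loader, device):
        model.eval()
        tot, frames = 0.0, 0.0
        with torch.no_grad():
            for batch in valid_loader:
                loss, n = compute_loss(model, batch, device)  # hypothetical helper
                tot += loss.item() * n
                frames += n
        model.train()
        peak_mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
        return tot / frames, peak_mb   # e.g. (0.1594, 6481) as logged above
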
], batch size: 42, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:38:02,617 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=131830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:04,480 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5838, 1.5611, 1.3582, 1.5891, 1.9450, 1.8628, 1.6381, 1.3947], device='cuda:6'), covar=tensor([0.0354, 0.0308, 0.0615, 0.0263, 0.0203, 0.0443, 0.0304, 0.0439], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0105, 0.0142, 0.0110, 0.0099, 0.0111, 0.0100, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.6767e-05, 8.0810e-05, 1.1125e-04, 8.4142e-05, 7.6696e-05, 8.1777e-05, 7.4386e-05, 8.4478e-05], device='cuda:6') 2023-03-27 04:38:07,277 INFO [finetune.py:976] (6/7) Epoch 24, batch 100, loss[loss=0.1579, simple_loss=0.2253, pruned_loss=0.04527, over 4825.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2432, pruned_loss=0.05145, over 380703.04 frames. ], batch size: 33, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:38:15,492 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=131848.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:20,990 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=131857.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:25,099 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.662e+01 1.465e+02 1.761e+02 2.142e+02 3.724e+02, threshold=3.523e+02, percent-clipped=1.0 2023-03-27 04:38:34,599 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=131878.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:38:40,507 INFO [finetune.py:976] (6/7) Epoch 24, batch 150, loss[loss=0.1714, simple_loss=0.2462, pruned_loss=0.04826, over 4762.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2391, pruned_loss=0.04996, over 508092.78 frames. ], batch size: 27, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:38:45,056 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4672, 1.4821, 1.9339, 1.7119, 1.5989, 3.3474, 1.4538, 1.5550], device='cuda:6'), covar=tensor([0.1020, 0.1769, 0.1051, 0.0985, 0.1556, 0.0258, 0.1478, 0.1839], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 04:39:10,819 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=131918.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:39:29,542 INFO [finetune.py:976] (6/7) Epoch 24, batch 200, loss[loss=0.173, simple_loss=0.2381, pruned_loss=0.05394, over 4817.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2378, pruned_loss=0.04937, over 608224.51 frames. ], batch size: 25, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:39:51,182 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.517e+02 1.799e+02 2.123e+02 6.232e+02, threshold=3.598e+02, percent-clipped=3.0 2023-03-27 04:39:52,162 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.33 vs. limit=5.0 2023-03-27 04:40:06,649 INFO [finetune.py:976] (6/7) Epoch 24, batch 250, loss[loss=0.1616, simple_loss=0.2396, pruned_loss=0.04181, over 4762.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2416, pruned_loss=0.04976, over 687349.24 frames. 
], batch size: 26, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:40:10,400 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.67 vs. limit=2.0 2023-03-27 04:40:36,950 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132031.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:40:41,434 INFO [finetune.py:976] (6/7) Epoch 24, batch 300, loss[loss=0.2378, simple_loss=0.301, pruned_loss=0.08733, over 4914.00 frames. ], tot_loss[loss=0.1757, simple_loss=0.2472, pruned_loss=0.05203, over 745589.97 frames. ], batch size: 36, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:40:53,258 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132054.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:40:53,845 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7180, 1.5339, 1.5512, 1.5893, 1.1168, 3.6564, 1.4047, 1.8310], device='cuda:6'), covar=tensor([0.3096, 0.2528, 0.2191, 0.2462, 0.1888, 0.0182, 0.2584, 0.1286], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 04:40:59,158 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.635e+02 1.887e+02 2.261e+02 6.512e+02, threshold=3.774e+02, percent-clipped=2.0 2023-03-27 04:40:59,882 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132065.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:04,207 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132072.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:14,122 INFO [finetune.py:976] (6/7) Epoch 24, batch 350, loss[loss=0.1811, simple_loss=0.2577, pruned_loss=0.05227, over 4798.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2484, pruned_loss=0.05208, over 790754.51 frames. ], batch size: 45, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:41:17,755 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132092.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:32,278 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8127, 1.7664, 1.5095, 1.9844, 2.2644, 1.9670, 1.6598, 1.4692], device='cuda:6'), covar=tensor([0.2102, 0.1835, 0.1805, 0.1477, 0.1674, 0.1154, 0.2148, 0.1856], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0207, 0.0211, 0.0193, 0.0241, 0.0187, 0.0213, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:41:33,451 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132113.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:39,184 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132115.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:41:42,168 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132120.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:41:56,058 INFO [finetune.py:976] (6/7) Epoch 24, batch 400, loss[loss=0.1748, simple_loss=0.2492, pruned_loss=0.05016, over 4873.00 frames. ], tot_loss[loss=0.1773, simple_loss=0.2498, pruned_loss=0.05244, over 826836.06 frames. 
], batch size: 34, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:42:03,865 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132148.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:42:11,268 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3784, 1.3096, 1.5579, 2.4746, 1.6886, 2.1386, 0.8930, 2.1078], device='cuda:6'), covar=tensor([0.1738, 0.1385, 0.1182, 0.0765, 0.0875, 0.1318, 0.1527, 0.0595], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0100, 0.0136, 0.0124, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 04:42:15,410 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.594e+02 1.883e+02 2.349e+02 4.739e+02, threshold=3.766e+02, percent-clipped=1.0 2023-03-27 04:42:29,845 INFO [finetune.py:976] (6/7) Epoch 24, batch 450, loss[loss=0.1667, simple_loss=0.2477, pruned_loss=0.04284, over 4777.00 frames. ], tot_loss[loss=0.1769, simple_loss=0.249, pruned_loss=0.05235, over 855565.20 frames. ], batch size: 26, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:42:36,329 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132196.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:42:38,165 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0481, 0.9824, 0.9375, 1.1394, 1.1894, 1.1272, 0.9959, 0.9255], device='cuda:6'), covar=tensor([0.0370, 0.0333, 0.0663, 0.0297, 0.0293, 0.0448, 0.0336, 0.0420], device='cuda:6'), in_proj_covar=tensor([0.0098, 0.0105, 0.0142, 0.0110, 0.0098, 0.0110, 0.0100, 0.0110], device='cuda:6'), out_proj_covar=tensor([7.6328e-05, 8.0439e-05, 1.1112e-04, 8.3876e-05, 7.6321e-05, 8.1177e-05, 7.4037e-05, 8.3915e-05], device='cuda:6') 2023-03-27 04:42:55,014 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=132213.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:43:13,245 INFO [finetune.py:976] (6/7) Epoch 24, batch 500, loss[loss=0.1916, simple_loss=0.257, pruned_loss=0.06313, over 4895.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2466, pruned_loss=0.05169, over 879868.84 frames. ], batch size: 43, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:43:24,825 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-27 04:43:32,460 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.902e+01 1.519e+02 1.809e+02 2.135e+02 3.897e+02, threshold=3.617e+02, percent-clipped=1.0 2023-03-27 04:43:46,922 INFO [finetune.py:976] (6/7) Epoch 24, batch 550, loss[loss=0.1324, simple_loss=0.2085, pruned_loss=0.02818, over 4866.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2433, pruned_loss=0.05093, over 897687.17 frames. ], batch size: 31, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:44:28,331 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 04:44:30,208 INFO [finetune.py:976] (6/7) Epoch 24, batch 600, loss[loss=0.2208, simple_loss=0.2845, pruned_loss=0.07853, over 4837.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2448, pruned_loss=0.05135, over 910527.20 frames. 
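
The lr field decays very slowly across this section, from 3.08e-03 down to 3.05e-03 over a few thousand batches, consistent with a smooth power-law schedule in both batch and epoch. Assuming an Eden-style schedule of the kind used elsewhere in icefall (the scheduler is not named in this excerpt):

    def eden_lr(base_lr, batch, epoch, lr_batches, lr_epochs):
        # Smooth power-law decay in both batch and epoch; lr_batches and
        # lr_epochs set where each factor begins to bite, and their values
        # come from the run configuration.
        return (base_lr
                * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
                * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

Around batch 131,000 both factors change only gradually, which is why successive records here move just in the third significant digit of lr.
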
], batch size: 49, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:44:33,309 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5982, 1.2021, 0.9860, 1.4887, 1.9917, 1.5006, 1.5593, 1.6180], device='cuda:6'), covar=tensor([0.1442, 0.1996, 0.1749, 0.1198, 0.1957, 0.1972, 0.1332, 0.1820], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0118, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 04:44:58,205 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.737e+01 1.576e+02 1.859e+02 2.330e+02 3.343e+02, threshold=3.718e+02, percent-clipped=0.0 2023-03-27 04:45:12,586 INFO [finetune.py:976] (6/7) Epoch 24, batch 650, loss[loss=0.1757, simple_loss=0.2628, pruned_loss=0.04429, over 4809.00 frames. ], tot_loss[loss=0.1772, simple_loss=0.2487, pruned_loss=0.0528, over 921664.57 frames. ], batch size: 38, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:45:13,153 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=132387.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:45:28,117 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=132410.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 04:45:46,141 INFO [finetune.py:976] (6/7) Epoch 24, batch 700, loss[loss=0.1927, simple_loss=0.2547, pruned_loss=0.06536, over 4908.00 frames. ], tot_loss[loss=0.1775, simple_loss=0.2494, pruned_loss=0.05282, over 929725.66 frames. ], batch size: 32, lr: 3.07e-03, grad_scale: 16.0 2023-03-27 04:46:03,854 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.085e+02 1.592e+02 1.911e+02 2.262e+02 4.191e+02, threshold=3.822e+02, percent-clipped=2.0 2023-03-27 04:46:19,326 INFO [finetune.py:976] (6/7) Epoch 24, batch 750, loss[loss=0.2369, simple_loss=0.2938, pruned_loss=0.08996, over 4817.00 frames. ], tot_loss[loss=0.1788, simple_loss=0.2511, pruned_loss=0.05326, over 935435.44 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:46:36,492 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132513.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:46:46,108 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.7192, 4.1522, 4.3005, 4.4719, 4.4498, 4.2016, 4.8144, 1.5914], device='cuda:6'), covar=tensor([0.0695, 0.0889, 0.0757, 0.0860, 0.1148, 0.1362, 0.0543, 0.5708], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0249, 0.0282, 0.0296, 0.0341, 0.0289, 0.0308, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:46:58,683 INFO [finetune.py:976] (6/7) Epoch 24, batch 800, loss[loss=0.1749, simple_loss=0.2533, pruned_loss=0.04826, over 4905.00 frames. ], tot_loss[loss=0.1768, simple_loss=0.2492, pruned_loss=0.05219, over 939763.92 frames. ], batch size: 37, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:47:17,622 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132561.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:47:19,960 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.775e+01 1.480e+02 1.729e+02 2.075e+02 4.531e+02, threshold=3.459e+02, percent-clipped=1.0 2023-03-27 04:47:35,957 INFO [finetune.py:976] (6/7) Epoch 24, batch 850, loss[loss=0.1397, simple_loss=0.2209, pruned_loss=0.02926, over 4758.00 frames. 
], tot_loss[loss=0.1745, simple_loss=0.2467, pruned_loss=0.05118, over 945942.93 frames. ], batch size: 26, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:47:42,764 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6012, 1.5678, 2.2252, 1.9610, 1.8865, 4.3159, 1.5956, 1.8337], device='cuda:6'), covar=tensor([0.0947, 0.1737, 0.1154, 0.0921, 0.1600, 0.0137, 0.1458, 0.1798], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 04:48:18,614 INFO [finetune.py:976] (6/7) Epoch 24, batch 900, loss[loss=0.1553, simple_loss=0.2244, pruned_loss=0.04315, over 4806.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2431, pruned_loss=0.05017, over 947670.41 frames. ], batch size: 29, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:48:33,142 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132660.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:48:35,408 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.417e+02 1.718e+02 2.002e+02 3.598e+02, threshold=3.436e+02, percent-clipped=1.0 2023-03-27 04:48:44,378 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 04:48:52,530 INFO [finetune.py:976] (6/7) Epoch 24, batch 950, loss[loss=0.1669, simple_loss=0.2489, pruned_loss=0.04247, over 4747.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2412, pruned_loss=0.04962, over 949418.74 frames. ], batch size: 54, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:48:52,617 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132687.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:07,071 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=132710.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 04:49:14,314 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132721.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:26,440 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132735.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:28,060 INFO [finetune.py:976] (6/7) Epoch 24, batch 1000, loss[loss=0.1973, simple_loss=0.271, pruned_loss=0.06177, over 4732.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2427, pruned_loss=0.05044, over 949707.25 frames. 
], batch size: 54, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:49:38,700 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=132746.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:50,820 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=132758.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:49:50,851 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5845, 1.4205, 2.0561, 3.1140, 2.0328, 2.2202, 1.0326, 2.6106], device='cuda:6'), covar=tensor([0.1621, 0.1413, 0.1146, 0.0575, 0.0805, 0.1689, 0.1754, 0.0480], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0161, 0.0100, 0.0135, 0.0123, 0.0099], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 04:49:54,880 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.613e+02 1.803e+02 2.355e+02 4.590e+02, threshold=3.605e+02, percent-clipped=3.0 2023-03-27 04:50:17,568 INFO [finetune.py:976] (6/7) Epoch 24, batch 1050, loss[loss=0.189, simple_loss=0.2607, pruned_loss=0.05866, over 4802.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2452, pruned_loss=0.05026, over 953007.76 frames. ], batch size: 51, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:50:31,476 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=132807.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:50:46,157 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.6239, 4.0289, 4.2075, 4.3930, 4.3641, 4.1161, 4.7197, 1.6240], device='cuda:6'), covar=tensor([0.0703, 0.0911, 0.0824, 0.0857, 0.1104, 0.1490, 0.0538, 0.5546], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0248, 0.0282, 0.0294, 0.0340, 0.0287, 0.0307, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:50:51,433 INFO [finetune.py:976] (6/7) Epoch 24, batch 1100, loss[loss=0.1558, simple_loss=0.2211, pruned_loss=0.04528, over 4492.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2476, pruned_loss=0.05136, over 952635.73 frames. ], batch size: 20, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:51:08,749 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.623e+02 1.902e+02 2.304e+02 4.937e+02, threshold=3.804e+02, percent-clipped=2.0 2023-03-27 04:51:24,185 INFO [finetune.py:976] (6/7) Epoch 24, batch 1150, loss[loss=0.1697, simple_loss=0.242, pruned_loss=0.04865, over 4900.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2483, pruned_loss=0.05189, over 952741.99 frames. ], batch size: 37, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:51:28,504 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-27 04:51:57,324 INFO [finetune.py:976] (6/7) Epoch 24, batch 1200, loss[loss=0.1497, simple_loss=0.2229, pruned_loss=0.03823, over 4819.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2469, pruned_loss=0.0515, over 954158.39 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:52:24,718 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.055e+01 1.470e+02 1.716e+02 2.148e+02 3.548e+02, threshold=3.432e+02, percent-clipped=0.0 2023-03-27 04:52:40,276 INFO [finetune.py:976] (6/7) Epoch 24, batch 1250, loss[loss=0.1387, simple_loss=0.2002, pruned_loss=0.03861, over 4764.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.245, pruned_loss=0.05095, over 954607.84 frames. 
], batch size: 26, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:52:59,662 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133016.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:53:03,397 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4924, 2.3442, 1.9537, 0.8228, 2.1280, 2.0515, 1.8287, 2.2588], device='cuda:6'), covar=tensor([0.0798, 0.0732, 0.1337, 0.2098, 0.1225, 0.1994, 0.2254, 0.0747], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0181, 0.0210, 0.0209, 0.0224, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:53:15,460 INFO [finetune.py:976] (6/7) Epoch 24, batch 1300, loss[loss=0.176, simple_loss=0.2441, pruned_loss=0.054, over 4752.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2418, pruned_loss=0.04963, over 957398.48 frames. ], batch size: 59, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:53:32,332 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1769, 4.6530, 4.5365, 2.5413, 4.7476, 3.8438, 0.9448, 3.2262], device='cuda:6'), covar=tensor([0.2274, 0.1984, 0.1444, 0.3173, 0.0815, 0.0782, 0.4852, 0.1488], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0179, 0.0162, 0.0129, 0.0161, 0.0124, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 04:53:42,214 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.345e+01 1.496e+02 1.852e+02 2.141e+02 4.041e+02, threshold=3.705e+02, percent-clipped=2.0 2023-03-27 04:53:57,208 INFO [finetune.py:976] (6/7) Epoch 24, batch 1350, loss[loss=0.1671, simple_loss=0.2463, pruned_loss=0.04396, over 4873.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2415, pruned_loss=0.04987, over 959115.61 frames. ], batch size: 34, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:54:07,945 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133102.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:54:23,232 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0 2023-03-27 04:54:31,058 INFO [finetune.py:976] (6/7) Epoch 24, batch 1400, loss[loss=0.249, simple_loss=0.3181, pruned_loss=0.09, over 4127.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2449, pruned_loss=0.05105, over 958708.57 frames. 
], batch size: 65, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:54:34,130 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.9200, 4.3638, 4.5467, 4.7729, 4.6598, 4.2898, 5.0057, 1.4607], device='cuda:6'), covar=tensor([0.0691, 0.0729, 0.0753, 0.0753, 0.1115, 0.1595, 0.0565, 0.5943], device='cuda:6'), in_proj_covar=tensor([0.0349, 0.0248, 0.0282, 0.0294, 0.0339, 0.0288, 0.0308, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:54:59,477 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.355e+01 1.624e+02 1.914e+02 2.315e+02 3.947e+02, threshold=3.828e+02, percent-clipped=1.0 2023-03-27 04:54:59,592 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7188, 1.3604, 0.9523, 1.5820, 2.0516, 1.5203, 1.5555, 1.5427], device='cuda:6'), covar=tensor([0.1504, 0.2063, 0.1938, 0.1223, 0.2101, 0.2123, 0.1520, 0.2126], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 04:55:00,838 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4856, 1.0504, 0.7358, 1.2753, 1.9248, 0.7036, 1.2092, 1.3180], device='cuda:6'), covar=tensor([0.1573, 0.2146, 0.1825, 0.1324, 0.1915, 0.1976, 0.1635, 0.2037], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 04:55:19,624 INFO [finetune.py:976] (6/7) Epoch 24, batch 1450, loss[loss=0.1788, simple_loss=0.2635, pruned_loss=0.04706, over 4908.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2466, pruned_loss=0.05157, over 956029.56 frames. ], batch size: 37, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:55:56,690 INFO [finetune.py:976] (6/7) Epoch 24, batch 1500, loss[loss=0.1987, simple_loss=0.2755, pruned_loss=0.06096, over 4837.00 frames. ], tot_loss[loss=0.1759, simple_loss=0.2478, pruned_loss=0.05202, over 956761.82 frames. ], batch size: 44, lr: 3.06e-03, grad_scale: 16.0 2023-03-27 04:56:15,021 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.176e+02 1.579e+02 1.864e+02 2.355e+02 5.095e+02, threshold=3.727e+02, percent-clipped=2.0 2023-03-27 04:56:30,469 INFO [finetune.py:976] (6/7) Epoch 24, batch 1550, loss[loss=0.174, simple_loss=0.2472, pruned_loss=0.05036, over 4805.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2479, pruned_loss=0.05162, over 954519.54 frames. ], batch size: 40, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:56:50,672 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=133316.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:57:00,836 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8565, 3.3921, 3.5834, 3.7379, 3.5890, 3.4946, 3.9660, 1.3205], device='cuda:6'), covar=tensor([0.1023, 0.1019, 0.1053, 0.1120, 0.1556, 0.1723, 0.0902, 0.5919], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0248, 0.0282, 0.0294, 0.0339, 0.0287, 0.0307, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:57:04,266 INFO [finetune.py:976] (6/7) Epoch 24, batch 1600, loss[loss=0.1768, simple_loss=0.2531, pruned_loss=0.05022, over 4873.00 frames. 
], tot_loss[loss=0.1744, simple_loss=0.2463, pruned_loss=0.05126, over 954144.57 frames. ], batch size: 34, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:57:08,637 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8274, 1.8112, 1.7367, 1.8071, 1.7176, 4.3071, 1.6972, 2.1153], device='cuda:6'), covar=tensor([0.3190, 0.2431, 0.2022, 0.2339, 0.1389, 0.0126, 0.2509, 0.1187], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0123, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0005, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 04:57:27,550 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.06 vs. limit=5.0 2023-03-27 04:57:28,471 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.451e+02 1.796e+02 2.043e+02 3.402e+02, threshold=3.593e+02, percent-clipped=0.0 2023-03-27 04:57:28,566 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=133364.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:57:38,808 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133374.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 04:57:40,028 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133376.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:57:40,704 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 04:57:46,648 INFO [finetune.py:976] (6/7) Epoch 24, batch 1650, loss[loss=0.1342, simple_loss=0.2149, pruned_loss=0.02675, over 4940.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2434, pruned_loss=0.05012, over 955243.39 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:57:56,894 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=133402.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:58:00,116 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0 2023-03-27 04:58:19,439 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133435.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 04:58:20,501 INFO [finetune.py:976] (6/7) Epoch 24, batch 1700, loss[loss=0.227, simple_loss=0.2862, pruned_loss=0.08387, over 4845.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2403, pruned_loss=0.04906, over 954795.79 frames. ], batch size: 47, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:58:20,620 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133437.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:58:31,232 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=133450.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:58:48,615 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.499e+01 1.457e+02 1.770e+02 2.219e+02 3.253e+02, threshold=3.541e+02, percent-clipped=0.0 2023-03-27 04:58:55,867 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133474.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:59:04,125 INFO [finetune.py:976] (6/7) Epoch 24, batch 1750, loss[loss=0.1745, simple_loss=0.232, pruned_loss=0.05847, over 4013.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.242, pruned_loss=0.04971, over 955676.25 frames. 
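
The scaling.py:679 Whitening lines compare a per-group covariance statistic against a fixed limit (2.0 for the num_groups=8 records with 96 or 192 channels, 5.0 for the num_groups=1, 384-channel records); a metric below the limit, as in every record in this section, means no correction is needed. A hedged reconstruction of such a metric: it equals 1.0 when the group covariance has a flat eigenvalue spectrum (perfectly whitened features) and grows with eigenvalue spread. The exact formula in scaling.py may differ.

    import torch

    def whitening_metric(x, num_groups):
        # x: (num_frames, num_channels) activations; channels are split into
        # num_groups equal groups, as in the log records above.
        n, c = x.shape
        d = c // num_groups
        xg = x.reshape(n, num_groups, d).transpose(0, 1)       # (groups, n, d)
        cov = xg.transpose(1, 2) @ xg / n                      # (groups, d, d)
        tr = cov.diagonal(dim1=1, dim2=2).sum(-1)              # trace(C)
        tr_sq = (cov * cov.transpose(1, 2)).sum(dim=(1, 2))    # trace(C @ C)
        return (d * tr_sq / tr.pow(2)).mean()  # 1.0 iff eigenvalues are equal
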
], batch size: 17, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:59:36,818 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133535.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 04:59:37,911 INFO [finetune.py:976] (6/7) Epoch 24, batch 1800, loss[loss=0.2176, simple_loss=0.2888, pruned_loss=0.0732, over 4913.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2457, pruned_loss=0.05024, over 957788.34 frames. ], batch size: 36, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 04:59:47,624 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7679, 1.0471, 1.8229, 1.8549, 1.6575, 1.5972, 1.7216, 1.7431], device='cuda:6'), covar=tensor([0.3891, 0.3975, 0.3383, 0.3601, 0.5013, 0.3871, 0.4344, 0.3033], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0244, 0.0263, 0.0287, 0.0287, 0.0263, 0.0295, 0.0247], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 04:59:57,744 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.575e+02 1.839e+02 2.282e+02 3.463e+02, threshold=3.677e+02, percent-clipped=0.0 2023-03-27 05:00:10,193 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5636, 1.5588, 1.4897, 1.5852, 0.9440, 3.2982, 1.2000, 1.6396], device='cuda:6'), covar=tensor([0.3321, 0.2440, 0.2104, 0.2379, 0.1874, 0.0222, 0.2557, 0.1249], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:00:23,458 INFO [finetune.py:976] (6/7) Epoch 24, batch 1850, loss[loss=0.1982, simple_loss=0.2769, pruned_loss=0.05972, over 4919.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2472, pruned_loss=0.05101, over 957081.12 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:01:04,053 INFO [finetune.py:976] (6/7) Epoch 24, batch 1900, loss[loss=0.1991, simple_loss=0.2822, pruned_loss=0.05796, over 4822.00 frames. ], tot_loss[loss=0.176, simple_loss=0.2489, pruned_loss=0.05159, over 954943.18 frames. ], batch size: 33, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:01:14,196 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133652.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:01:21,804 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.540e+02 1.881e+02 2.277e+02 3.366e+02, threshold=3.762e+02, percent-clipped=0.0 2023-03-27 05:01:37,659 INFO [finetune.py:976] (6/7) Epoch 24, batch 1950, loss[loss=0.1552, simple_loss=0.2301, pruned_loss=0.04011, over 4826.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2458, pruned_loss=0.04976, over 954552.29 frames. 
], batch size: 33, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:01:55,015 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=133713.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:01:55,055 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2428, 2.1501, 1.8286, 1.9917, 1.9652, 1.9441, 2.0160, 2.8112], device='cuda:6'), covar=tensor([0.3587, 0.4016, 0.3180, 0.3535, 0.3638, 0.2462, 0.3563, 0.1550], device='cuda:6'), in_proj_covar=tensor([0.0292, 0.0263, 0.0235, 0.0277, 0.0259, 0.0228, 0.0255, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:02:06,303 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133730.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 05:02:07,531 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:02:11,409 INFO [finetune.py:976] (6/7) Epoch 24, batch 2000, loss[loss=0.1696, simple_loss=0.2374, pruned_loss=0.0509, over 4344.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2439, pruned_loss=0.04989, over 955303.86 frames. ], batch size: 65, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:02:21,107 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1802, 1.2728, 1.3561, 1.2846, 1.4038, 2.2449, 1.2646, 1.4214], device='cuda:6'), covar=tensor([0.0861, 0.1577, 0.1208, 0.0901, 0.1398, 0.0427, 0.1334, 0.1470], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0077, 0.0092, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:02:28,714 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.381e+01 1.373e+02 1.735e+02 2.243e+02 3.912e+02, threshold=3.469e+02, percent-clipped=2.0 2023-03-27 05:02:54,160 INFO [finetune.py:976] (6/7) Epoch 24, batch 2050, loss[loss=0.181, simple_loss=0.2422, pruned_loss=0.05983, over 4868.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2405, pruned_loss=0.04875, over 955131.47 frames. ], batch size: 34, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:02:55,149 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-27 05:03:02,094 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 05:03:23,704 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=133830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:03:23,760 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1430, 1.9083, 1.4782, 0.6049, 1.6047, 1.7761, 1.6439, 1.7799], device='cuda:6'), covar=tensor([0.0969, 0.0760, 0.1471, 0.1901, 0.1334, 0.2244, 0.2216, 0.0909], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0200, 0.0181, 0.0210, 0.0208, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:03:27,929 INFO [finetune.py:976] (6/7) Epoch 24, batch 2100, loss[loss=0.1982, simple_loss=0.259, pruned_loss=0.06871, over 4714.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2404, pruned_loss=0.0493, over 955135.88 frames. 
], batch size: 59, lr: 3.06e-03, grad_scale: 32.0 2023-03-27 05:03:47,591 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.827e+01 1.541e+02 1.860e+02 2.293e+02 6.118e+02, threshold=3.720e+02, percent-clipped=3.0 2023-03-27 05:04:11,228 INFO [finetune.py:976] (6/7) Epoch 24, batch 2150, loss[loss=0.1445, simple_loss=0.2234, pruned_loss=0.0328, over 4757.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2447, pruned_loss=0.05139, over 955673.24 frames. ], batch size: 27, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:04:44,946 INFO [finetune.py:976] (6/7) Epoch 24, batch 2200, loss[loss=0.2096, simple_loss=0.2774, pruned_loss=0.07094, over 4902.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2471, pruned_loss=0.05196, over 956174.01 frames. ], batch size: 46, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:05:02,715 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.464e+02 1.824e+02 2.239e+02 3.694e+02, threshold=3.648e+02, percent-clipped=0.0 2023-03-27 05:05:21,866 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.59 vs. limit=5.0 2023-03-27 05:05:24,729 INFO [finetune.py:976] (6/7) Epoch 24, batch 2250, loss[loss=0.2207, simple_loss=0.2858, pruned_loss=0.07781, over 4685.00 frames. ], tot_loss[loss=0.1763, simple_loss=0.2483, pruned_loss=0.05214, over 956027.72 frames. ], batch size: 59, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:05:26,542 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1132, 1.2589, 1.4214, 1.2727, 1.3803, 2.4364, 1.2474, 1.4028], device='cuda:6'), covar=tensor([0.1030, 0.1836, 0.1011, 0.1015, 0.1730, 0.0348, 0.1491, 0.1844], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0077, 0.0092, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:05:35,430 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=133997.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:05:49,821 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134008.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:08,041 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134030.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 05:06:09,736 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134032.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:12,681 INFO [finetune.py:976] (6/7) Epoch 24, batch 2300, loss[loss=0.1632, simple_loss=0.2341, pruned_loss=0.04619, over 4778.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2484, pruned_loss=0.05146, over 954994.66 frames. ], batch size: 26, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:06:27,070 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134058.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:31,056 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.533e+02 1.723e+02 2.089e+02 4.293e+02, threshold=3.445e+02, percent-clipped=2.0 2023-03-27 05:06:40,132 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134078.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 05:06:40,872 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. 
limit=2.0 2023-03-27 05:06:41,359 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134080.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:06:46,521 INFO [finetune.py:976] (6/7) Epoch 24, batch 2350, loss[loss=0.1484, simple_loss=0.2209, pruned_loss=0.03789, over 4755.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.246, pruned_loss=0.0509, over 954048.25 frames. ], batch size: 28, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:06:51,964 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3216, 2.2017, 1.7786, 2.2262, 2.2189, 1.9287, 2.5674, 2.3331], device='cuda:6'), covar=tensor([0.1276, 0.2027, 0.2932, 0.2502, 0.2330, 0.1657, 0.2891, 0.1595], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0256, 0.0251, 0.0208, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:07:11,959 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6251, 1.8999, 1.5346, 1.5515, 2.1175, 2.1372, 1.9580, 1.7924], device='cuda:6'), covar=tensor([0.0412, 0.0311, 0.0565, 0.0348, 0.0317, 0.0528, 0.0310, 0.0402], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0144, 0.0111, 0.0100, 0.0112, 0.0102, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7136e-05, 8.1320e-05, 1.1261e-04, 8.5213e-05, 7.7331e-05, 8.3200e-05, 7.5527e-05, 8.5214e-05], device='cuda:6') 2023-03-27 05:07:14,977 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:07:19,177 INFO [finetune.py:976] (6/7) Epoch 24, batch 2400, loss[loss=0.1607, simple_loss=0.2298, pruned_loss=0.04579, over 4822.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2433, pruned_loss=0.05059, over 953790.71 frames. ], batch size: 30, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:07:38,331 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.553e+01 1.434e+02 1.789e+02 2.166e+02 3.942e+02, threshold=3.577e+02, percent-clipped=1.0 2023-03-27 05:07:49,931 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134178.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:07:55,475 INFO [finetune.py:976] (6/7) Epoch 24, batch 2450, loss[loss=0.131, simple_loss=0.2056, pruned_loss=0.02818, over 4814.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2402, pruned_loss=0.04971, over 956426.62 frames. ], batch size: 38, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:08:21,266 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 05:08:30,477 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7584, 1.5909, 1.4848, 1.5899, 1.9598, 1.9775, 1.7155, 1.4799], device='cuda:6'), covar=tensor([0.0374, 0.0416, 0.0576, 0.0361, 0.0284, 0.0453, 0.0377, 0.0512], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0144, 0.0112, 0.0100, 0.0113, 0.0102, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7307e-05, 8.1348e-05, 1.1270e-04, 8.5507e-05, 7.7585e-05, 8.3456e-05, 7.5878e-05, 8.5421e-05], device='cuda:6') 2023-03-27 05:08:36,906 INFO [finetune.py:976] (6/7) Epoch 24, batch 2500, loss[loss=0.2301, simple_loss=0.3058, pruned_loss=0.07715, over 4813.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2425, pruned_loss=0.05096, over 955121.61 frames. 
], batch size: 51, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:08:43,415 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0111, 0.9555, 0.8894, 1.0827, 1.1819, 1.0974, 0.9627, 0.8727], device='cuda:6'), covar=tensor([0.0390, 0.0337, 0.0666, 0.0337, 0.0294, 0.0493, 0.0350, 0.0444], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0144, 0.0112, 0.0100, 0.0113, 0.0102, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.7414e-05, 8.1424e-05, 1.1289e-04, 8.5600e-05, 7.7718e-05, 8.3559e-05, 7.6040e-05, 8.5586e-05], device='cuda:6') 2023-03-27 05:08:55,716 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.396e+01 1.499e+02 1.865e+02 2.171e+02 5.575e+02, threshold=3.730e+02, percent-clipped=1.0 2023-03-27 05:08:56,434 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0204, 1.9623, 2.0569, 1.2511, 2.0023, 2.0439, 2.0693, 1.7077], device='cuda:6'), covar=tensor([0.0600, 0.0663, 0.0678, 0.0917, 0.0768, 0.0736, 0.0599, 0.1078], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0137, 0.0140, 0.0120, 0.0127, 0.0138, 0.0139, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:09:20,385 INFO [finetune.py:976] (6/7) Epoch 24, batch 2550, loss[loss=0.1624, simple_loss=0.2376, pruned_loss=0.0436, over 4899.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2453, pruned_loss=0.05098, over 955594.26 frames. ], batch size: 35, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:09:20,770 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 05:09:32,244 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=134304.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:09:32,280 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9243, 1.0758, 1.9426, 1.8928, 1.7686, 1.6920, 1.7732, 1.8652], device='cuda:6'), covar=tensor([0.3716, 0.3803, 0.3178, 0.3398, 0.4371, 0.3605, 0.4185, 0.3018], device='cuda:6'), in_proj_covar=tensor([0.0259, 0.0244, 0.0264, 0.0288, 0.0288, 0.0264, 0.0296, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:09:34,672 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134308.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:09:35,301 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=134309.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:09:53,553 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7037, 4.1711, 3.8521, 2.0113, 4.2094, 3.2074, 0.8980, 3.0013], device='cuda:6'), covar=tensor([0.2361, 0.1802, 0.1440, 0.3398, 0.1023, 0.0952, 0.4585, 0.1278], device='cuda:6'), in_proj_covar=tensor([0.0154, 0.0180, 0.0162, 0.0130, 0.0161, 0.0125, 0.0149, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 05:09:54,095 INFO [finetune.py:976] (6/7) Epoch 24, batch 2600, loss[loss=0.1703, simple_loss=0.2425, pruned_loss=0.04906, over 4741.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2464, pruned_loss=0.05093, over 954937.21 frames. 
], batch size: 27, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:10:04,831 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134353.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:07,201 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134356.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:12,059 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.471e+02 1.806e+02 2.184e+02 4.519e+02, threshold=3.612e+02, percent-clipped=2.0 2023-03-27 05:10:12,186 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6274, 2.8134, 2.7107, 1.8101, 2.5952, 2.8425, 2.8919, 2.4114], device='cuda:6'), covar=tensor([0.0578, 0.0531, 0.0661, 0.0964, 0.0789, 0.0703, 0.0600, 0.1009], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0138, 0.0141, 0.0121, 0.0128, 0.0140, 0.0141, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:10:13,309 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134365.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:16,829 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134370.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:10:29,856 INFO [finetune.py:976] (6/7) Epoch 24, batch 2650, loss[loss=0.1806, simple_loss=0.2454, pruned_loss=0.05791, over 4721.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2462, pruned_loss=0.05011, over 955580.36 frames. ], batch size: 54, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:11:21,024 INFO [finetune.py:976] (6/7) Epoch 24, batch 2700, loss[loss=0.1496, simple_loss=0.2268, pruned_loss=0.03618, over 4763.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2462, pruned_loss=0.05034, over 954958.08 frames. ], batch size: 26, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:11:29,429 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4148, 1.3674, 1.7534, 1.6976, 1.4871, 3.1835, 1.3517, 1.4621], device='cuda:6'), covar=tensor([0.1032, 0.1831, 0.1076, 0.1023, 0.1671, 0.0235, 0.1498, 0.1771], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:11:39,163 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.916e+01 1.416e+02 1.758e+02 2.159e+02 3.599e+02, threshold=3.516e+02, percent-clipped=0.0 2023-03-27 05:11:54,603 INFO [finetune.py:976] (6/7) Epoch 24, batch 2750, loss[loss=0.1293, simple_loss=0.1921, pruned_loss=0.03324, over 3994.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2429, pruned_loss=0.04945, over 955323.24 frames. ], batch size: 17, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:12:27,873 INFO [finetune.py:976] (6/7) Epoch 24, batch 2800, loss[loss=0.1662, simple_loss=0.2276, pruned_loss=0.05242, over 4814.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2396, pruned_loss=0.04842, over 956339.23 frames. ], batch size: 30, lr: 3.05e-03, grad_scale: 32.0 2023-03-27 05:12:32,177 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.98 vs. 
limit=5.0 2023-03-27 05:12:38,522 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5292, 1.4336, 1.4043, 1.4103, 0.8433, 2.2884, 0.7936, 1.2587], device='cuda:6'), covar=tensor([0.3224, 0.2496, 0.2247, 0.2473, 0.1966, 0.0394, 0.2646, 0.1307], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:12:46,116 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.301e+01 1.456e+02 1.823e+02 2.196e+02 4.309e+02, threshold=3.645e+02, percent-clipped=3.0 2023-03-27 05:12:46,251 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6908, 1.6222, 1.3680, 1.6531, 1.9844, 1.9494, 1.6817, 1.4428], device='cuda:6'), covar=tensor([0.0346, 0.0358, 0.0591, 0.0309, 0.0209, 0.0430, 0.0344, 0.0459], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0107, 0.0145, 0.0113, 0.0101, 0.0114, 0.0103, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.7975e-05, 8.1926e-05, 1.1380e-04, 8.6198e-05, 7.8490e-05, 8.4248e-05, 7.6571e-05, 8.6204e-05], device='cuda:6') 2023-03-27 05:13:01,599 INFO [finetune.py:976] (6/7) Epoch 24, batch 2850, loss[loss=0.1741, simple_loss=0.2499, pruned_loss=0.04917, over 4835.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2391, pruned_loss=0.04903, over 956905.99 frames. ], batch size: 47, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:13:45,378 INFO [finetune.py:976] (6/7) Epoch 24, batch 2900, loss[loss=0.1765, simple_loss=0.2557, pruned_loss=0.04868, over 4889.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.243, pruned_loss=0.05018, over 957321.25 frames. ], batch size: 32, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:13:50,729 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 05:13:56,310 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134653.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:00,546 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134660.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:03,944 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.559e+02 1.783e+02 2.063e+02 3.902e+02, threshold=3.566e+02, percent-clipped=1.0 2023-03-27 05:14:04,029 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=134665.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:20,897 INFO [finetune.py:976] (6/7) Epoch 24, batch 2950, loss[loss=0.1815, simple_loss=0.2524, pruned_loss=0.05533, over 4915.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2453, pruned_loss=0.05034, over 955920.59 frames. 
], batch size: 42, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:14:37,442 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=134701.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:14:52,790 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6116, 2.4653, 1.9973, 2.5396, 2.4129, 2.1288, 2.9309, 2.6254], device='cuda:6'), covar=tensor([0.1210, 0.2158, 0.2859, 0.2657, 0.2472, 0.1592, 0.3184, 0.1639], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0255, 0.0251, 0.0207, 0.0214, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:15:02,626 INFO [finetune.py:976] (6/7) Epoch 24, batch 3000, loss[loss=0.1704, simple_loss=0.2515, pruned_loss=0.0447, over 4784.00 frames. ], tot_loss[loss=0.1745, simple_loss=0.2472, pruned_loss=0.05095, over 956700.13 frames. ], batch size: 29, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:15:02,626 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 05:15:13,333 INFO [finetune.py:1010] (6/7) Epoch 24, validation: loss=0.1561, simple_loss=0.2251, pruned_loss=0.0436, over 2265189.00 frames. 2023-03-27 05:15:13,334 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 05:15:15,769 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6198, 1.5278, 2.2472, 3.3257, 2.2779, 2.3984, 1.2603, 2.7401], device='cuda:6'), covar=tensor([0.1710, 0.1435, 0.1207, 0.0545, 0.0762, 0.1574, 0.1604, 0.0505], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0163, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 05:15:31,259 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.188e+02 1.527e+02 1.858e+02 2.241e+02 5.364e+02, threshold=3.716e+02, percent-clipped=3.0 2023-03-27 05:15:36,175 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7118, 2.5440, 2.0797, 2.8639, 2.6422, 2.2841, 3.0809, 2.7219], device='cuda:6'), covar=tensor([0.1237, 0.2221, 0.3051, 0.2370, 0.2550, 0.1615, 0.2759, 0.1611], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0255, 0.0251, 0.0207, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:15:48,109 INFO [finetune.py:976] (6/7) Epoch 24, batch 3050, loss[loss=0.1543, simple_loss=0.2143, pruned_loss=0.04718, over 3985.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2474, pruned_loss=0.05102, over 955300.36 frames. ], batch size: 17, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:15:58,440 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.37 vs. limit=5.0 2023-03-27 05:16:39,365 INFO [finetune.py:976] (6/7) Epoch 24, batch 3100, loss[loss=0.1851, simple_loss=0.2577, pruned_loss=0.05627, over 4889.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2464, pruned_loss=0.05164, over 955841.50 frames. 
], batch size: 32, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:16:56,669 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=134862.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:16:58,376 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.835e+01 1.385e+02 1.668e+02 2.115e+02 5.080e+02, threshold=3.336e+02, percent-clipped=1.0 2023-03-27 05:17:12,756 INFO [finetune.py:976] (6/7) Epoch 24, batch 3150, loss[loss=0.1283, simple_loss=0.1953, pruned_loss=0.03058, over 3937.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2442, pruned_loss=0.0512, over 954159.45 frames. ], batch size: 17, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:17:30,522 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2788, 1.6147, 1.7058, 0.8004, 2.0302, 1.9060, 1.8515, 1.6824], device='cuda:6'), covar=tensor([0.0762, 0.0885, 0.0613, 0.0660, 0.0528, 0.0827, 0.0548, 0.0771], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0125, 0.0121, 0.0130, 0.0129, 0.0140, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.8256e-05, 1.0569e-04, 8.9446e-05, 8.4897e-05, 9.1159e-05, 9.1530e-05, 9.9788e-05, 1.0510e-04], device='cuda:6') 2023-03-27 05:17:32,344 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6577, 1.4189, 0.9214, 0.2628, 1.2206, 1.4908, 1.3182, 1.3072], device='cuda:6'), covar=tensor([0.1003, 0.1067, 0.1700, 0.2360, 0.1557, 0.2474, 0.2845, 0.1072], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0189, 0.0198, 0.0179, 0.0209, 0.0207, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:17:37,236 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=134923.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:17:46,591 INFO [finetune.py:976] (6/7) Epoch 24, batch 3200, loss[loss=0.1871, simple_loss=0.2461, pruned_loss=0.0641, over 4828.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2408, pruned_loss=0.04981, over 952926.04 frames. ], batch size: 33, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:18:03,030 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134960.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:04,201 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1669, 3.6165, 3.7984, 4.0504, 3.9522, 3.6059, 4.2362, 1.2924], device='cuda:6'), covar=tensor([0.0882, 0.0900, 0.0969, 0.1016, 0.1243, 0.1733, 0.0802, 0.5798], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0246, 0.0280, 0.0292, 0.0335, 0.0286, 0.0306, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:18:05,936 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.665e+01 1.513e+02 1.793e+02 2.252e+02 3.579e+02, threshold=3.586e+02, percent-clipped=1.0 2023-03-27 05:18:06,044 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=134965.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:22,478 INFO [finetune.py:976] (6/7) Epoch 24, batch 3250, loss[loss=0.1692, simple_loss=0.2306, pruned_loss=0.05386, over 4766.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2408, pruned_loss=0.04985, over 950464.18 frames. 
], batch size: 54, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:18:45,537 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135008.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:46,188 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7489, 1.6602, 1.8750, 1.0743, 1.7488, 1.8289, 1.9048, 1.4520], device='cuda:6'), covar=tensor([0.0593, 0.0773, 0.0669, 0.0917, 0.0990, 0.0722, 0.0581, 0.1248], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0136, 0.0139, 0.0119, 0.0127, 0.0138, 0.0138, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:18:48,605 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135013.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:18:50,458 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7087, 1.6596, 1.6185, 1.6867, 1.2362, 3.7401, 1.5589, 1.9629], device='cuda:6'), covar=tensor([0.3297, 0.2588, 0.2095, 0.2398, 0.1815, 0.0200, 0.2651, 0.1259], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:19:03,028 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-27 05:19:04,067 INFO [finetune.py:976] (6/7) Epoch 24, batch 3300, loss[loss=0.1809, simple_loss=0.2573, pruned_loss=0.05224, over 4891.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.245, pruned_loss=0.05135, over 949057.45 frames. ], batch size: 32, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:19:23,521 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.563e+02 1.899e+02 2.275e+02 5.700e+02, threshold=3.799e+02, percent-clipped=2.0 2023-03-27 05:19:44,194 INFO [finetune.py:976] (6/7) Epoch 24, batch 3350, loss[loss=0.1877, simple_loss=0.2532, pruned_loss=0.06108, over 4845.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2465, pruned_loss=0.05175, over 948965.88 frames. ], batch size: 31, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:20:21,442 INFO [finetune.py:976] (6/7) Epoch 24, batch 3400, loss[loss=0.1991, simple_loss=0.2615, pruned_loss=0.06831, over 4813.00 frames. ], tot_loss[loss=0.178, simple_loss=0.2497, pruned_loss=0.05315, over 951857.32 frames. ], batch size: 38, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:20:38,716 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0167, 2.0317, 1.6532, 1.8878, 1.8411, 1.8595, 1.8964, 2.6579], device='cuda:6'), covar=tensor([0.3475, 0.3753, 0.3031, 0.3668, 0.3727, 0.2326, 0.3469, 0.1534], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0262, 0.0234, 0.0275, 0.0256, 0.0226, 0.0253, 0.0235], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:20:40,369 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.077e+02 1.584e+02 1.828e+02 2.150e+02 3.792e+02, threshold=3.656e+02, percent-clipped=0.0 2023-03-27 05:20:44,636 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135171.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:20:48,457 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. 
limit=2.0 2023-03-27 05:20:54,276 INFO [finetune.py:976] (6/7) Epoch 24, batch 3450, loss[loss=0.1892, simple_loss=0.2649, pruned_loss=0.05673, over 4909.00 frames. ], tot_loss[loss=0.1778, simple_loss=0.2497, pruned_loss=0.05294, over 951813.16 frames. ], batch size: 37, lr: 3.05e-03, grad_scale: 16.0 2023-03-27 05:21:27,791 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=135218.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:21:40,738 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=135232.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:21:47,100 INFO [finetune.py:976] (6/7) Epoch 24, batch 3500, loss[loss=0.1334, simple_loss=0.208, pruned_loss=0.02938, over 4907.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2459, pruned_loss=0.05135, over 952212.11 frames. ], batch size: 32, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:21:54,577 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.64 vs. limit=5.0 2023-03-27 05:22:06,083 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.678e+01 1.506e+02 1.714e+02 2.011e+02 3.544e+02, threshold=3.428e+02, percent-clipped=0.0 2023-03-27 05:22:20,457 INFO [finetune.py:976] (6/7) Epoch 24, batch 3550, loss[loss=0.1803, simple_loss=0.2452, pruned_loss=0.0577, over 4904.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2423, pruned_loss=0.05016, over 954399.02 frames. ], batch size: 43, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:22:54,386 INFO [finetune.py:976] (6/7) Epoch 24, batch 3600, loss[loss=0.1676, simple_loss=0.2352, pruned_loss=0.05002, over 4823.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2387, pruned_loss=0.0485, over 955878.77 frames. ], batch size: 51, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:23:06,379 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8662, 1.4362, 1.9455, 1.8934, 1.6696, 1.6398, 1.8192, 1.8140], device='cuda:6'), covar=tensor([0.3864, 0.3616, 0.2902, 0.3434, 0.4414, 0.3763, 0.4123, 0.2752], device='cuda:6'), in_proj_covar=tensor([0.0260, 0.0244, 0.0264, 0.0290, 0.0289, 0.0265, 0.0296, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:23:10,590 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5473, 3.9838, 4.1654, 4.4232, 4.3058, 4.0673, 4.6732, 1.4078], device='cuda:6'), covar=tensor([0.0746, 0.0751, 0.0882, 0.0888, 0.1237, 0.1558, 0.0678, 0.5866], device='cuda:6'), in_proj_covar=tensor([0.0345, 0.0245, 0.0280, 0.0291, 0.0336, 0.0284, 0.0305, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:23:12,797 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.699e+01 1.474e+02 1.759e+02 2.084e+02 3.295e+02, threshold=3.517e+02, percent-clipped=0.0 2023-03-27 05:23:28,231 INFO [finetune.py:976] (6/7) Epoch 24, batch 3650, loss[loss=0.1489, simple_loss=0.2348, pruned_loss=0.03151, over 4857.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2408, pruned_loss=0.04921, over 954322.58 frames. 
], batch size: 44, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:23:57,924 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5566, 1.5150, 2.1360, 3.1498, 2.0873, 2.1958, 1.0634, 2.6235], device='cuda:6'), covar=tensor([0.1903, 0.1432, 0.1210, 0.0606, 0.0849, 0.1560, 0.1826, 0.0546], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0164, 0.0101, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 05:24:11,249 INFO [finetune.py:976] (6/7) Epoch 24, batch 3700, loss[loss=0.1852, simple_loss=0.2586, pruned_loss=0.05593, over 4752.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2444, pruned_loss=0.05047, over 955562.93 frames. ], batch size: 59, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:24:28,522 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.118e+01 1.614e+02 1.999e+02 2.429e+02 5.138e+02, threshold=3.998e+02, percent-clipped=6.0 2023-03-27 05:24:43,337 INFO [finetune.py:976] (6/7) Epoch 24, batch 3750, loss[loss=0.1779, simple_loss=0.2516, pruned_loss=0.05213, over 4790.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2464, pruned_loss=0.05119, over 954961.13 frames. ], batch size: 25, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:25:12,913 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=135518.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:25:18,855 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=135527.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:25:18,863 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9946, 1.8129, 2.3740, 3.5032, 2.4546, 2.6958, 1.4977, 2.8442], device='cuda:6'), covar=tensor([0.1502, 0.1248, 0.1172, 0.0530, 0.0670, 0.2152, 0.1469, 0.0469], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0163, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 05:25:26,760 INFO [finetune.py:976] (6/7) Epoch 24, batch 3800, loss[loss=0.1398, simple_loss=0.2137, pruned_loss=0.03292, over 4669.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2478, pruned_loss=0.05157, over 954705.84 frames. ], batch size: 23, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:25:32,987 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4594, 1.5175, 1.2470, 1.4417, 1.7819, 1.7055, 1.4523, 1.3210], device='cuda:6'), covar=tensor([0.0373, 0.0307, 0.0685, 0.0310, 0.0236, 0.0524, 0.0357, 0.0445], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0108, 0.0148, 0.0114, 0.0102, 0.0116, 0.0104, 0.0115], device='cuda:6'), out_proj_covar=tensor([7.8407e-05, 8.2709e-05, 1.1543e-04, 8.7079e-05, 7.9461e-05, 8.5885e-05, 7.7439e-05, 8.7083e-05], device='cuda:6') 2023-03-27 05:25:36,985 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-27 05:25:44,706 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.524e+02 1.815e+02 2.221e+02 4.659e+02, threshold=3.630e+02, percent-clipped=3.0 2023-03-27 05:25:45,391 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135566.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:26:00,454 INFO [finetune.py:976] (6/7) Epoch 24, batch 3850, loss[loss=0.1462, simple_loss=0.2192, pruned_loss=0.03658, over 4781.00 frames. 
], tot_loss[loss=0.1736, simple_loss=0.2461, pruned_loss=0.05055, over 955512.29 frames. ], batch size: 28, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:26:18,020 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1615, 1.8210, 2.1544, 2.1473, 1.8821, 1.9153, 2.0926, 2.0161], device='cuda:6'), covar=tensor([0.3894, 0.4122, 0.3046, 0.3732, 0.4807, 0.3935, 0.4666, 0.2744], device='cuda:6'), in_proj_covar=tensor([0.0261, 0.0244, 0.0265, 0.0290, 0.0289, 0.0266, 0.0296, 0.0249], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:26:38,634 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135630.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:26:45,794 INFO [finetune.py:976] (6/7) Epoch 24, batch 3900, loss[loss=0.1868, simple_loss=0.258, pruned_loss=0.05783, over 4899.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2441, pruned_loss=0.0504, over 954478.96 frames. ], batch size: 36, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:27:08,427 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9085, 1.4403, 1.9988, 1.9920, 1.7414, 1.7035, 1.9185, 1.9012], device='cuda:6'), covar=tensor([0.3769, 0.3968, 0.3095, 0.3634, 0.4559, 0.3731, 0.4227, 0.2942], device='cuda:6'), in_proj_covar=tensor([0.0262, 0.0245, 0.0266, 0.0291, 0.0290, 0.0267, 0.0297, 0.0250], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:27:10,711 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.672e+01 1.400e+02 1.667e+02 1.961e+02 4.314e+02, threshold=3.334e+02, percent-clipped=1.0 2023-03-27 05:27:26,034 INFO [finetune.py:976] (6/7) Epoch 24, batch 3950, loss[loss=0.134, simple_loss=0.2038, pruned_loss=0.03212, over 4824.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2414, pruned_loss=0.04957, over 957105.03 frames. ], batch size: 25, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:27:29,116 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=135691.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:27:39,365 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6283, 3.2994, 3.2599, 1.9582, 3.4399, 2.7261, 1.3069, 2.4326], device='cuda:6'), covar=tensor([0.3115, 0.1987, 0.1342, 0.2617, 0.1045, 0.0901, 0.3482, 0.1372], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0180, 0.0162, 0.0130, 0.0162, 0.0124, 0.0149, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 05:27:57,305 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5656, 1.6025, 2.1450, 3.2521, 2.1935, 2.3322, 0.9578, 2.6387], device='cuda:6'), covar=tensor([0.1715, 0.1222, 0.1222, 0.0554, 0.0771, 0.1377, 0.1811, 0.0506], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0132, 0.0162, 0.0100, 0.0135, 0.0123, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 05:27:58,422 INFO [finetune.py:976] (6/7) Epoch 24, batch 4000, loss[loss=0.1691, simple_loss=0.2438, pruned_loss=0.04719, over 4736.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2421, pruned_loss=0.05057, over 955647.71 frames. 
], batch size: 59, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:28:16,421 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.536e+01 1.548e+02 1.897e+02 2.315e+02 3.943e+02, threshold=3.793e+02, percent-clipped=5.0 2023-03-27 05:28:31,232 INFO [finetune.py:976] (6/7) Epoch 24, batch 4050, loss[loss=0.18, simple_loss=0.2627, pruned_loss=0.0486, over 4898.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2455, pruned_loss=0.05148, over 955740.36 frames. ], batch size: 35, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:28:59,140 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=135827.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:29:09,952 INFO [finetune.py:976] (6/7) Epoch 24, batch 4100, loss[loss=0.1706, simple_loss=0.246, pruned_loss=0.04757, over 4890.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2468, pruned_loss=0.0513, over 957414.38 frames. ], batch size: 32, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:29:16,210 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4851, 1.4400, 1.4155, 1.3808, 0.7534, 2.2679, 0.7662, 1.1535], device='cuda:6'), covar=tensor([0.3198, 0.2545, 0.2181, 0.2536, 0.2076, 0.0383, 0.2495, 0.1333], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0114, 0.0096, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:29:32,645 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.978e+01 1.562e+02 1.866e+02 2.353e+02 4.250e+02, threshold=3.731e+02, percent-clipped=2.0 2023-03-27 05:29:39,220 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=135875.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:29:41,611 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6473, 1.6447, 1.8043, 1.0360, 1.8256, 1.8964, 1.9107, 1.6359], device='cuda:6'), covar=tensor([0.0788, 0.0572, 0.0518, 0.0474, 0.0512, 0.0621, 0.0408, 0.0560], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0146, 0.0124, 0.0120, 0.0129, 0.0127, 0.0139, 0.0146], device='cuda:6'), out_proj_covar=tensor([8.7844e-05, 1.0535e-04, 8.8810e-05, 8.4081e-05, 9.0745e-05, 9.0620e-05, 9.9101e-05, 1.0452e-04], device='cuda:6') 2023-03-27 05:29:46,949 INFO [finetune.py:976] (6/7) Epoch 24, batch 4150, loss[loss=0.1587, simple_loss=0.2443, pruned_loss=0.03655, over 4806.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2462, pruned_loss=0.05096, over 954758.10 frames. ], batch size: 38, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:30:30,442 INFO [finetune.py:976] (6/7) Epoch 24, batch 4200, loss[loss=0.1449, simple_loss=0.2338, pruned_loss=0.02805, over 4921.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2462, pruned_loss=0.05062, over 954020.49 frames. ], batch size: 41, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:30:40,108 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-27 05:30:48,006 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 05:30:49,318 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.053e+01 1.587e+02 1.796e+02 2.438e+02 3.967e+02, threshold=3.591e+02, percent-clipped=1.0 2023-03-27 05:30:57,351 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. 
limit=2.0 2023-03-27 05:31:00,643 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135982.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:31:03,039 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=135986.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:31:03,578 INFO [finetune.py:976] (6/7) Epoch 24, batch 4250, loss[loss=0.2204, simple_loss=0.2743, pruned_loss=0.0832, over 4902.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2446, pruned_loss=0.05008, over 954735.78 frames. ], batch size: 46, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:31:08,462 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=135994.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:31:45,376 INFO [finetune.py:976] (6/7) Epoch 24, batch 4300, loss[loss=0.164, simple_loss=0.2314, pruned_loss=0.04827, over 4818.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2422, pruned_loss=0.04927, over 957193.66 frames. ], batch size: 38, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:31:49,208 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136043.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:31:49,264 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0 2023-03-27 05:32:03,105 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136055.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:32:04,965 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 05:32:14,140 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.579e+01 1.491e+02 1.827e+02 2.181e+02 5.621e+02, threshold=3.653e+02, percent-clipped=1.0 2023-03-27 05:32:20,771 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-27 05:32:31,267 INFO [finetune.py:976] (6/7) Epoch 24, batch 4350, loss[loss=0.1584, simple_loss=0.2276, pruned_loss=0.0446, over 4716.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2394, pruned_loss=0.04836, over 957512.69 frames. ], batch size: 23, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:32:47,256 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3395, 1.4469, 1.9490, 1.7100, 1.5769, 3.5550, 1.3638, 1.6456], device='cuda:6'), covar=tensor([0.1010, 0.1729, 0.1126, 0.0950, 0.1573, 0.0218, 0.1446, 0.1715], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0072, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:33:04,534 INFO [finetune.py:976] (6/7) Epoch 24, batch 4400, loss[loss=0.2063, simple_loss=0.2652, pruned_loss=0.0737, over 3954.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2416, pruned_loss=0.04997, over 956281.98 frames. 
], batch size: 65, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:33:08,154 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136142.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:33:19,881 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9974, 1.4687, 2.0781, 2.0176, 1.8111, 1.8056, 1.9807, 1.9376], device='cuda:6'), covar=tensor([0.3997, 0.4144, 0.3292, 0.3736, 0.5143, 0.3694, 0.4603, 0.3109], device='cuda:6'), in_proj_covar=tensor([0.0261, 0.0244, 0.0264, 0.0289, 0.0289, 0.0266, 0.0296, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:33:23,890 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.540e+02 1.819e+02 2.170e+02 3.954e+02, threshold=3.638e+02, percent-clipped=3.0 2023-03-27 05:33:37,770 INFO [finetune.py:976] (6/7) Epoch 24, batch 4450, loss[loss=0.2212, simple_loss=0.2899, pruned_loss=0.07622, over 4926.00 frames. ], tot_loss[loss=0.1738, simple_loss=0.2456, pruned_loss=0.051, over 958105.79 frames. ], batch size: 38, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:33:39,852 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.84 vs. limit=5.0 2023-03-27 05:33:45,540 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8866, 3.3959, 3.6224, 3.7776, 3.6341, 3.4402, 3.9530, 1.1951], device='cuda:6'), covar=tensor([0.0961, 0.0985, 0.1031, 0.1026, 0.1558, 0.1721, 0.0893, 0.5784], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0248, 0.0284, 0.0296, 0.0339, 0.0288, 0.0308, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:33:48,785 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136203.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:34:13,143 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3894, 2.2579, 1.8275, 2.1909, 2.2072, 1.9953, 2.5565, 2.4553], device='cuda:6'), covar=tensor([0.1270, 0.1953, 0.2874, 0.2520, 0.2549, 0.1626, 0.3238, 0.1548], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0255, 0.0252, 0.0207, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:34:13,598 INFO [finetune.py:976] (6/7) Epoch 24, batch 4500, loss[loss=0.1688, simple_loss=0.2411, pruned_loss=0.04826, over 4921.00 frames. ], tot_loss[loss=0.1761, simple_loss=0.2481, pruned_loss=0.05204, over 959865.16 frames. ], batch size: 33, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:34:39,505 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.886e+01 1.509e+02 1.852e+02 2.239e+02 3.856e+02, threshold=3.704e+02, percent-clipped=1.0 2023-03-27 05:34:54,273 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136286.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:34:54,762 INFO [finetune.py:976] (6/7) Epoch 24, batch 4550, loss[loss=0.2012, simple_loss=0.2669, pruned_loss=0.06782, over 4729.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2471, pruned_loss=0.05131, over 957975.24 frames. 
], batch size: 59, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:35:28,228 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136334.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:28,897 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136335.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:30,003 INFO [finetune.py:976] (6/7) Epoch 24, batch 4600, loss[loss=0.2041, simple_loss=0.2625, pruned_loss=0.07282, over 4798.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2461, pruned_loss=0.05081, over 956300.07 frames. ], batch size: 45, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:35:35,220 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136338.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:45,767 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136350.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:35:45,826 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5614, 1.6864, 1.3370, 1.6165, 1.9111, 1.9235, 1.6284, 1.4799], device='cuda:6'), covar=tensor([0.0304, 0.0285, 0.0623, 0.0320, 0.0197, 0.0386, 0.0279, 0.0371], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0105, 0.0144, 0.0110, 0.0100, 0.0113, 0.0101, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.6718e-05, 8.0602e-05, 1.1257e-04, 8.4590e-05, 7.7356e-05, 8.3429e-05, 7.5131e-05, 8.4879e-05], device='cuda:6') 2023-03-27 05:35:56,276 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.624e+02 1.856e+02 2.259e+02 4.732e+02, threshold=3.713e+02, percent-clipped=2.0 2023-03-27 05:36:11,518 INFO [finetune.py:976] (6/7) Epoch 24, batch 4650, loss[loss=0.2304, simple_loss=0.2819, pruned_loss=0.08943, over 4182.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2442, pruned_loss=0.05059, over 956334.75 frames. ], batch size: 65, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:36:17,127 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136396.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:36:29,379 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 05:36:45,427 INFO [finetune.py:976] (6/7) Epoch 24, batch 4700, loss[loss=0.1629, simple_loss=0.2326, pruned_loss=0.04661, over 4906.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2421, pruned_loss=0.05008, over 957538.84 frames. ], batch size: 35, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:37:13,732 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.054e+02 1.479e+02 1.754e+02 2.064e+02 3.231e+02, threshold=3.507e+02, percent-clipped=0.0 2023-03-27 05:37:38,206 INFO [finetune.py:976] (6/7) Epoch 24, batch 4750, loss[loss=0.1747, simple_loss=0.2612, pruned_loss=0.04405, over 4741.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2412, pruned_loss=0.05005, over 958229.92 frames. ], batch size: 59, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:37:39,003 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. 
limit=2.0 2023-03-27 05:37:44,925 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136498.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:38:02,852 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6676, 0.9529, 1.5542, 1.5645, 1.3782, 1.3489, 1.4863, 1.5514], device='cuda:6'), covar=tensor([0.4436, 0.4467, 0.4042, 0.4164, 0.5665, 0.4775, 0.5144, 0.3807], device='cuda:6'), in_proj_covar=tensor([0.0261, 0.0244, 0.0265, 0.0290, 0.0289, 0.0266, 0.0296, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:38:10,343 INFO [finetune.py:976] (6/7) Epoch 24, batch 4800, loss[loss=0.1976, simple_loss=0.2653, pruned_loss=0.06495, over 4897.00 frames. ], tot_loss[loss=0.173, simple_loss=0.244, pruned_loss=0.05095, over 956167.82 frames. ], batch size: 35, lr: 3.04e-03, grad_scale: 16.0 2023-03-27 05:38:17,387 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.96 vs. limit=2.0 2023-03-27 05:38:28,976 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.582e+02 2.021e+02 2.347e+02 5.093e+02, threshold=4.042e+02, percent-clipped=3.0 2023-03-27 05:38:44,076 INFO [finetune.py:976] (6/7) Epoch 24, batch 4850, loss[loss=0.1895, simple_loss=0.2341, pruned_loss=0.07243, over 4247.00 frames. ], tot_loss[loss=0.1754, simple_loss=0.2469, pruned_loss=0.05197, over 954569.99 frames. ], batch size: 18, lr: 3.04e-03, grad_scale: 32.0 2023-03-27 05:38:58,737 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6468, 1.5086, 1.5272, 1.5727, 1.1637, 3.6284, 1.4175, 1.8193], device='cuda:6'), covar=tensor([0.3119, 0.2506, 0.2088, 0.2302, 0.1750, 0.0197, 0.2610, 0.1278], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0117, 0.0122, 0.0124, 0.0114, 0.0097, 0.0095, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:39:05,381 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.25 vs. limit=5.0 2023-03-27 05:39:17,547 INFO [finetune.py:976] (6/7) Epoch 24, batch 4900, loss[loss=0.1963, simple_loss=0.2722, pruned_loss=0.06025, over 4738.00 frames. ], tot_loss[loss=0.1751, simple_loss=0.2472, pruned_loss=0.05151, over 955044.85 frames. ], batch size: 54, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:39:18,276 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136638.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:39:22,266 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136643.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:39:26,544 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136650.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:39:42,308 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.600e+02 1.925e+02 2.438e+02 3.559e+02, threshold=3.849e+02, percent-clipped=0.0 2023-03-27 05:39:55,065 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-27 05:39:57,295 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.81 vs. 
limit=2.0 2023-03-27 05:39:59,920 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136686.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:00,476 INFO [finetune.py:976] (6/7) Epoch 24, batch 4950, loss[loss=0.1919, simple_loss=0.2604, pruned_loss=0.0617, over 4313.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2479, pruned_loss=0.05139, over 954444.20 frames. ], batch size: 66, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:40:03,950 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136691.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:05,846 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9657, 3.7357, 3.2997, 2.2446, 3.5592, 3.0280, 2.8929, 3.3787], device='cuda:6'), covar=tensor([0.0525, 0.0600, 0.1297, 0.1670, 0.1054, 0.1496, 0.1456, 0.0725], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0190, 0.0198, 0.0180, 0.0209, 0.0208, 0.0222, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:40:08,770 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136698.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:12,485 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136704.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:40:33,841 INFO [finetune.py:976] (6/7) Epoch 24, batch 5000, loss[loss=0.1249, simple_loss=0.1895, pruned_loss=0.03009, over 4200.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2453, pruned_loss=0.05096, over 951875.64 frames. ], batch size: 18, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:40:54,396 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.7755, 3.3006, 3.4672, 3.6533, 3.5744, 3.3538, 3.8498, 1.2254], device='cuda:6'), covar=tensor([0.0883, 0.0893, 0.1055, 0.1126, 0.1317, 0.1620, 0.0938, 0.5493], device='cuda:6'), in_proj_covar=tensor([0.0346, 0.0245, 0.0282, 0.0292, 0.0337, 0.0284, 0.0305, 0.0300], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:41:02,608 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.076e+02 1.520e+02 1.782e+02 2.173e+02 3.913e+02, threshold=3.563e+02, percent-clipped=1.0 2023-03-27 05:41:17,087 INFO [finetune.py:976] (6/7) Epoch 24, batch 5050, loss[loss=0.1879, simple_loss=0.2525, pruned_loss=0.06161, over 4910.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2426, pruned_loss=0.05022, over 947562.31 frames. 
], batch size: 46, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:41:20,077 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4237, 1.4368, 1.9269, 1.7149, 1.5967, 3.3140, 1.4076, 1.6043], device='cuda:6'), covar=tensor([0.1052, 0.1889, 0.0976, 0.0948, 0.1599, 0.0271, 0.1470, 0.1765], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:41:25,184 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136798.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:41:30,693 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=136806.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:41:41,057 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0792, 1.8498, 2.0918, 1.5824, 2.0496, 2.2842, 2.2072, 1.6646], device='cuda:6'), covar=tensor([0.0476, 0.0611, 0.0597, 0.0767, 0.0850, 0.0490, 0.0457, 0.1030], device='cuda:6'), in_proj_covar=tensor([0.0129, 0.0135, 0.0137, 0.0117, 0.0124, 0.0136, 0.0136, 0.0159], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:41:49,841 INFO [finetune.py:976] (6/7) Epoch 24, batch 5100, loss[loss=0.1501, simple_loss=0.2123, pruned_loss=0.04401, over 4389.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2405, pruned_loss=0.05, over 948433.30 frames. ], batch size: 19, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:41:56,304 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=136846.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:42:11,833 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.576e+02 1.921e+02 2.257e+02 4.191e+02, threshold=3.841e+02, percent-clipped=1.0 2023-03-27 05:42:17,785 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=136867.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:42:35,178 INFO [finetune.py:976] (6/7) Epoch 24, batch 5150, loss[loss=0.1602, simple_loss=0.2315, pruned_loss=0.04442, over 4775.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2402, pruned_loss=0.04992, over 949031.90 frames. ], batch size: 28, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:43:00,313 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1918, 2.0982, 2.2243, 1.6050, 2.0185, 2.3391, 2.2981, 1.7642], device='cuda:6'), covar=tensor([0.0556, 0.0620, 0.0624, 0.0780, 0.0702, 0.0692, 0.0549, 0.1110], device='cuda:6'), in_proj_covar=tensor([0.0128, 0.0134, 0.0136, 0.0116, 0.0124, 0.0136, 0.0135, 0.0158], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:43:16,539 INFO [finetune.py:976] (6/7) Epoch 24, batch 5200, loss[loss=0.1546, simple_loss=0.2323, pruned_loss=0.03843, over 4791.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2426, pruned_loss=0.05003, over 949527.66 frames. 
], batch size: 29, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:43:33,221 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2590, 1.5894, 0.8479, 1.9356, 2.4154, 1.8099, 1.7881, 1.9253], device='cuda:6'), covar=tensor([0.1399, 0.1903, 0.1989, 0.1161, 0.1881, 0.1956, 0.1357, 0.1918], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 05:43:35,514 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.133e+02 1.647e+02 1.940e+02 2.397e+02 3.428e+02, threshold=3.879e+02, percent-clipped=0.0 2023-03-27 05:43:48,851 INFO [finetune.py:976] (6/7) Epoch 24, batch 5250, loss[loss=0.2288, simple_loss=0.2986, pruned_loss=0.07953, over 4917.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2444, pruned_loss=0.05068, over 949990.43 frames. ], batch size: 42, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:43:51,884 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=136991.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:43:57,217 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=136999.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:44:22,623 INFO [finetune.py:976] (6/7) Epoch 24, batch 5300, loss[loss=0.2038, simple_loss=0.2835, pruned_loss=0.06204, over 4798.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2469, pruned_loss=0.05143, over 952725.04 frames. ], batch size: 45, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:44:23,947 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=137039.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:44:42,413 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.600e+02 1.832e+02 2.198e+02 3.821e+02, threshold=3.665e+02, percent-clipped=0.0 2023-03-27 05:45:04,669 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6037, 1.1062, 0.8071, 1.4049, 1.9805, 1.0646, 1.3345, 1.4994], device='cuda:6'), covar=tensor([0.1527, 0.2105, 0.1877, 0.1287, 0.2011, 0.1988, 0.1510, 0.1933], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 05:45:05,288 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0150, 1.4274, 0.9027, 1.6476, 2.2451, 1.6479, 1.6871, 1.6667], device='cuda:6'), covar=tensor([0.1429, 0.1989, 0.1870, 0.1296, 0.1936, 0.1806, 0.1458, 0.2033], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 05:45:05,793 INFO [finetune.py:976] (6/7) Epoch 24, batch 5350, loss[loss=0.135, simple_loss=0.2098, pruned_loss=0.03015, over 4665.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2477, pruned_loss=0.05149, over 954751.41 frames. ], batch size: 23, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:45:38,838 INFO [finetune.py:976] (6/7) Epoch 24, batch 5400, loss[loss=0.2024, simple_loss=0.2588, pruned_loss=0.07295, over 4275.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.246, pruned_loss=0.05093, over 956253.55 frames. 
], batch size: 65, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:45:57,157 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=137162.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:45:58,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.720e+01 1.449e+02 1.770e+02 2.209e+02 4.288e+02, threshold=3.541e+02, percent-clipped=1.0 2023-03-27 05:46:22,826 INFO [finetune.py:976] (6/7) Epoch 24, batch 5450, loss[loss=0.1787, simple_loss=0.2491, pruned_loss=0.05419, over 4861.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2434, pruned_loss=0.05035, over 954783.31 frames. ], batch size: 44, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:46:31,923 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6552, 1.5466, 1.3902, 1.7659, 2.0747, 1.7092, 1.3638, 1.3956], device='cuda:6'), covar=tensor([0.2345, 0.2179, 0.2257, 0.1733, 0.1553, 0.1360, 0.2689, 0.2008], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0210, 0.0213, 0.0194, 0.0242, 0.0188, 0.0214, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:46:37,303 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8199, 1.5873, 2.0999, 1.3570, 1.9609, 2.1386, 1.5106, 2.2629], device='cuda:6'), covar=tensor([0.1153, 0.2128, 0.1169, 0.1781, 0.0733, 0.1129, 0.2959, 0.0693], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0206, 0.0191, 0.0189, 0.0174, 0.0213, 0.0216, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:46:51,331 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-27 05:46:55,995 INFO [finetune.py:976] (6/7) Epoch 24, batch 5500, loss[loss=0.1815, simple_loss=0.2579, pruned_loss=0.05255, over 4905.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2405, pruned_loss=0.04925, over 954110.53 frames. ], batch size: 37, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:47:13,426 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.493e+02 1.897e+02 2.213e+02 3.719e+02, threshold=3.794e+02, percent-clipped=2.0 2023-03-27 05:47:14,034 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4226, 1.3140, 1.2699, 1.3296, 1.5932, 1.5423, 1.3639, 1.2540], device='cuda:6'), covar=tensor([0.0434, 0.0367, 0.0715, 0.0356, 0.0279, 0.0503, 0.0383, 0.0432], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0145, 0.0111, 0.0101, 0.0113, 0.0102, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7175e-05, 8.1238e-05, 1.1309e-04, 8.5183e-05, 7.8133e-05, 8.3700e-05, 7.5482e-05, 8.5315e-05], device='cuda:6') 2023-03-27 05:47:34,604 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0 2023-03-27 05:47:36,859 INFO [finetune.py:976] (6/7) Epoch 24, batch 5550, loss[loss=0.1444, simple_loss=0.2215, pruned_loss=0.0337, over 4747.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2417, pruned_loss=0.0499, over 953463.81 frames. 
], batch size: 23, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:47:44,087 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9241, 2.8343, 2.6599, 3.3630, 2.9307, 2.7203, 3.4178, 3.0637], device='cuda:6'), covar=tensor([0.1266, 0.1928, 0.2565, 0.1964, 0.2232, 0.1539, 0.2205, 0.1530], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0254, 0.0250, 0.0206, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:47:47,666 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=137299.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:48:14,913 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5917, 1.6560, 1.7568, 0.9378, 1.8442, 2.1229, 2.0643, 1.5558], device='cuda:6'), covar=tensor([0.0912, 0.0696, 0.0563, 0.0581, 0.0501, 0.0484, 0.0336, 0.0746], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0128, 0.0122, 0.0132, 0.0131, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.9924e-05, 1.0787e-04, 9.1458e-05, 8.6144e-05, 9.2513e-05, 9.2882e-05, 1.0159e-04, 1.0693e-04], device='cuda:6') 2023-03-27 05:48:20,578 INFO [finetune.py:976] (6/7) Epoch 24, batch 5600, loss[loss=0.1573, simple_loss=0.242, pruned_loss=0.03631, over 4868.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2448, pruned_loss=0.05073, over 953178.14 frames. ], batch size: 34, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:48:26,394 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=137347.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:48:32,125 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.56 vs. limit=2.0 2023-03-27 05:48:37,702 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.121e+02 1.584e+02 1.843e+02 2.259e+02 3.753e+02, threshold=3.686e+02, percent-clipped=0.0 2023-03-27 05:48:51,115 INFO [finetune.py:976] (6/7) Epoch 24, batch 5650, loss[loss=0.1935, simple_loss=0.2604, pruned_loss=0.06327, over 4932.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2466, pruned_loss=0.05089, over 952612.76 frames. ], batch size: 33, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:48:55,382 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 05:49:20,947 INFO [finetune.py:976] (6/7) Epoch 24, batch 5700, loss[loss=0.1247, simple_loss=0.1867, pruned_loss=0.03137, over 3944.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2441, pruned_loss=0.0505, over 937240.79 frames. 
], batch size: 17, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:49:31,092 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7427, 2.6935, 2.5224, 3.0169, 3.5477, 2.8223, 3.1542, 2.2915], device='cuda:6'), covar=tensor([0.1912, 0.1691, 0.1591, 0.1284, 0.1261, 0.0905, 0.1453, 0.1697], device='cuda:6'), in_proj_covar=tensor([0.0242, 0.0210, 0.0213, 0.0195, 0.0243, 0.0189, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:49:35,756 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=137462.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:49:52,033 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.393e+01 1.414e+02 1.686e+02 2.138e+02 3.465e+02, threshold=3.373e+02, percent-clipped=0.0 2023-03-27 05:49:52,049 INFO [finetune.py:976] (6/7) Epoch 25, batch 0, loss[loss=0.1796, simple_loss=0.2571, pruned_loss=0.051, over 4833.00 frames. ], tot_loss[loss=0.1796, simple_loss=0.2571, pruned_loss=0.051, over 4833.00 frames. ], batch size: 47, lr: 3.03e-03, grad_scale: 32.0 2023-03-27 05:49:52,049 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 05:49:57,889 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6362, 3.3676, 3.2012, 1.6740, 3.3949, 2.6782, 0.9857, 2.4959], device='cuda:6'), covar=tensor([0.1774, 0.1482, 0.1516, 0.2776, 0.1074, 0.0933, 0.3150, 0.1259], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0177, 0.0160, 0.0129, 0.0161, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 05:50:06,682 INFO [finetune.py:1010] (6/7) Epoch 25, validation: loss=0.1587, simple_loss=0.2267, pruned_loss=0.04536, over 2265189.00 frames. 2023-03-27 05:50:06,682 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 05:50:46,520 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=137510.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:50:50,026 INFO [finetune.py:976] (6/7) Epoch 25, batch 50, loss[loss=0.1815, simple_loss=0.2391, pruned_loss=0.06197, over 4878.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2432, pruned_loss=0.04795, over 216682.12 frames. ], batch size: 35, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:50:52,336 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7994, 1.7143, 1.7193, 1.7423, 1.1700, 3.3688, 1.2852, 1.7229], device='cuda:6'), covar=tensor([0.2941, 0.2317, 0.1894, 0.2153, 0.1641, 0.0221, 0.2387, 0.1200], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:50:54,368 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.12 vs. 
limit=5.0 2023-03-27 05:51:06,079 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5318, 1.3467, 1.8885, 1.8306, 1.4952, 3.3170, 1.1875, 1.4299], device='cuda:6'), covar=tensor([0.1129, 0.2340, 0.1413, 0.1065, 0.1923, 0.0291, 0.2041, 0.2340], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0081, 0.0073, 0.0075, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 05:51:09,927 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9891, 1.7255, 2.4107, 3.8618, 2.5577, 2.7880, 0.9152, 3.1476], device='cuda:6'), covar=tensor([0.1805, 0.1641, 0.1527, 0.0804, 0.0881, 0.1722, 0.2217, 0.0482], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0163, 0.0101, 0.0136, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 05:51:12,271 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6056, 1.6510, 1.6378, 0.8918, 1.7735, 1.9653, 1.9889, 1.4740], device='cuda:6'), covar=tensor([0.1125, 0.0778, 0.0631, 0.0679, 0.0595, 0.0712, 0.0382, 0.0813], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0150, 0.0127, 0.0123, 0.0131, 0.0130, 0.0142, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.9609e-05, 1.0756e-04, 9.0843e-05, 8.6264e-05, 9.2124e-05, 9.2654e-05, 1.0144e-04, 1.0627e-04], device='cuda:6') 2023-03-27 05:51:25,239 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.123e+02 1.580e+02 1.851e+02 2.170e+02 4.183e+02, threshold=3.702e+02, percent-clipped=2.0 2023-03-27 05:51:25,255 INFO [finetune.py:976] (6/7) Epoch 25, batch 100, loss[loss=0.2069, simple_loss=0.2604, pruned_loss=0.07672, over 4905.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2393, pruned_loss=0.04904, over 382134.14 frames. ], batch size: 46, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:51:59,264 INFO [finetune.py:976] (6/7) Epoch 25, batch 150, loss[loss=0.1287, simple_loss=0.2092, pruned_loss=0.02416, over 4751.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2357, pruned_loss=0.04828, over 509290.24 frames. ], batch size: 28, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:52:09,246 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-27 05:52:33,558 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.113e+02 1.544e+02 1.791e+02 2.141e+02 4.771e+02, threshold=3.582e+02, percent-clipped=2.0 2023-03-27 05:52:33,574 INFO [finetune.py:976] (6/7) Epoch 25, batch 200, loss[loss=0.1832, simple_loss=0.2553, pruned_loss=0.05557, over 4903.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2357, pruned_loss=0.04825, over 607792.58 frames. ], batch size: 43, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:52:49,683 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 05:53:26,999 INFO [finetune.py:976] (6/7) Epoch 25, batch 250, loss[loss=0.1675, simple_loss=0.2507, pruned_loss=0.04216, over 4753.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2423, pruned_loss=0.05103, over 684678.96 frames. 
], batch size: 54, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:53:31,860 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8655, 2.7916, 2.4866, 3.1104, 2.6420, 2.6824, 2.6133, 3.5788], device='cuda:6'), covar=tensor([0.3377, 0.4307, 0.3132, 0.3301, 0.3995, 0.2326, 0.3911, 0.1462], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0265, 0.0236, 0.0277, 0.0260, 0.0229, 0.0257, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:54:00,392 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.518e+01 1.603e+02 1.998e+02 2.287e+02 4.515e+02, threshold=3.995e+02, percent-clipped=2.0 2023-03-27 05:54:00,407 INFO [finetune.py:976] (6/7) Epoch 25, batch 300, loss[loss=0.1823, simple_loss=0.2558, pruned_loss=0.05442, over 4735.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2456, pruned_loss=0.05143, over 745823.95 frames. ], batch size: 59, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:54:01,163 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1360, 1.8267, 2.1721, 2.2244, 1.9127, 1.9337, 2.1776, 2.0343], device='cuda:6'), covar=tensor([0.4540, 0.4222, 0.3229, 0.4037, 0.5142, 0.4105, 0.4728, 0.3098], device='cuda:6'), in_proj_covar=tensor([0.0262, 0.0246, 0.0266, 0.0291, 0.0291, 0.0267, 0.0297, 0.0249], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 05:54:33,830 INFO [finetune.py:976] (6/7) Epoch 25, batch 350, loss[loss=0.2068, simple_loss=0.2826, pruned_loss=0.06551, over 4866.00 frames. ], tot_loss[loss=0.1765, simple_loss=0.2484, pruned_loss=0.05228, over 792869.04 frames. ], batch size: 34, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:55:00,348 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-27 05:55:07,125 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.541e+02 1.822e+02 2.129e+02 2.910e+02, threshold=3.644e+02, percent-clipped=0.0 2023-03-27 05:55:07,141 INFO [finetune.py:976] (6/7) Epoch 25, batch 400, loss[loss=0.1758, simple_loss=0.2477, pruned_loss=0.05201, over 4912.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2472, pruned_loss=0.0513, over 827600.16 frames. ], batch size: 37, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:55:31,423 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=137892.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:55:33,189 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=137895.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:55:53,569 INFO [finetune.py:976] (6/7) Epoch 25, batch 450, loss[loss=0.1905, simple_loss=0.259, pruned_loss=0.06103, over 4925.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2459, pruned_loss=0.0508, over 855641.75 frames. 
], batch size: 38, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:55:54,828 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9807, 1.3932, 0.7006, 1.8517, 2.3192, 1.7924, 1.6264, 1.8226], device='cuda:6'), covar=tensor([0.1326, 0.1878, 0.1967, 0.1006, 0.1782, 0.1830, 0.1258, 0.1724], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0118, 0.0092, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 05:56:19,226 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=137953.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:56:20,977 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=137956.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:56:26,866 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.118e+02 1.489e+02 1.801e+02 2.267e+02 5.324e+02, threshold=3.602e+02, percent-clipped=3.0 2023-03-27 05:56:26,882 INFO [finetune.py:976] (6/7) Epoch 25, batch 500, loss[loss=0.1915, simple_loss=0.2655, pruned_loss=0.05873, over 4900.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2438, pruned_loss=0.05037, over 879529.97 frames. ], batch size: 32, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:56:57,244 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 05:56:58,230 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5916, 1.6859, 1.3834, 1.6505, 1.9864, 1.8539, 1.6594, 1.4716], device='cuda:6'), covar=tensor([0.0347, 0.0318, 0.0586, 0.0319, 0.0200, 0.0552, 0.0272, 0.0446], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0145, 0.0112, 0.0101, 0.0114, 0.0102, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.7783e-05, 8.1419e-05, 1.1367e-04, 8.5842e-05, 7.8596e-05, 8.3952e-05, 7.5641e-05, 8.5857e-05], device='cuda:6') 2023-03-27 05:57:01,547 INFO [finetune.py:976] (6/7) Epoch 25, batch 550, loss[loss=0.173, simple_loss=0.2398, pruned_loss=0.05308, over 4779.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2405, pruned_loss=0.04938, over 896364.68 frames. ], batch size: 28, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:57:34,656 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.805e+01 1.427e+02 1.740e+02 1.994e+02 3.808e+02, threshold=3.480e+02, percent-clipped=1.0 2023-03-27 05:57:34,672 INFO [finetune.py:976] (6/7) Epoch 25, batch 600, loss[loss=0.2184, simple_loss=0.2863, pruned_loss=0.07522, over 4764.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2424, pruned_loss=0.05023, over 912270.15 frames. ], batch size: 54, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:58:07,364 INFO [finetune.py:976] (6/7) Epoch 25, batch 650, loss[loss=0.1611, simple_loss=0.2503, pruned_loss=0.03592, over 4917.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2449, pruned_loss=0.05082, over 922196.67 frames. 
], batch size: 38, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:58:34,867 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4913, 1.5376, 1.6511, 0.8812, 1.7232, 1.9640, 1.9330, 1.4158], device='cuda:6'), covar=tensor([0.0797, 0.0656, 0.0516, 0.0529, 0.0439, 0.0599, 0.0303, 0.0689], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0127, 0.0122, 0.0131, 0.0129, 0.0141, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.9281e-05, 1.0735e-04, 9.0633e-05, 8.5889e-05, 9.1829e-05, 9.2007e-05, 1.0089e-04, 1.0580e-04], device='cuda:6') 2023-03-27 05:58:59,109 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.613e+02 1.882e+02 2.325e+02 5.163e+02, threshold=3.765e+02, percent-clipped=4.0 2023-03-27 05:58:59,125 INFO [finetune.py:976] (6/7) Epoch 25, batch 700, loss[loss=0.2069, simple_loss=0.2919, pruned_loss=0.06098, over 4807.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2457, pruned_loss=0.05052, over 930073.22 frames. ], batch size: 40, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:59:26,294 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-27 05:59:32,528 INFO [finetune.py:976] (6/7) Epoch 25, batch 750, loss[loss=0.1475, simple_loss=0.2329, pruned_loss=0.03106, over 4818.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2467, pruned_loss=0.0509, over 934154.77 frames. ], batch size: 39, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 05:59:53,531 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138248.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 05:59:55,845 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138251.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:00:05,193 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.386e+01 1.513e+02 1.803e+02 2.270e+02 6.862e+02, threshold=3.605e+02, percent-clipped=3.0 2023-03-27 06:00:05,209 INFO [finetune.py:976] (6/7) Epoch 25, batch 800, loss[loss=0.1672, simple_loss=0.2412, pruned_loss=0.04654, over 4789.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2458, pruned_loss=0.05023, over 939264.17 frames. ], batch size: 29, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 06:00:08,356 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138270.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:00:19,842 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.57 vs. limit=2.0 2023-03-27 06:00:38,349 INFO [finetune.py:976] (6/7) Epoch 25, batch 850, loss[loss=0.1466, simple_loss=0.2168, pruned_loss=0.03823, over 4911.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2443, pruned_loss=0.05053, over 941763.65 frames. 
], batch size: 36, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 06:00:44,520 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138322.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:00:45,809 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2342, 2.1391, 1.9274, 2.0781, 2.0385, 2.0687, 2.1096, 2.7869], device='cuda:6'), covar=tensor([0.3769, 0.4248, 0.3138, 0.3539, 0.3899, 0.2461, 0.3803, 0.1614], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0264, 0.0235, 0.0276, 0.0259, 0.0229, 0.0257, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:00:54,537 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138331.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:01:24,849 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.388e+02 1.706e+02 2.091e+02 3.563e+02, threshold=3.412e+02, percent-clipped=0.0 2023-03-27 06:01:24,865 INFO [finetune.py:976] (6/7) Epoch 25, batch 900, loss[loss=0.1823, simple_loss=0.2345, pruned_loss=0.065, over 4803.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2408, pruned_loss=0.04895, over 947240.27 frames. ], batch size: 45, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 06:01:35,156 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8530, 1.8399, 1.7596, 1.8414, 1.4057, 3.2869, 1.5393, 1.9768], device='cuda:6'), covar=tensor([0.2976, 0.2088, 0.1869, 0.2166, 0.1563, 0.0271, 0.2260, 0.1047], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:01:36,375 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138383.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:01:55,432 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6792, 1.6361, 1.4332, 1.8851, 2.1503, 1.8592, 1.4751, 1.3536], device='cuda:6'), covar=tensor([0.2449, 0.2213, 0.2199, 0.1691, 0.1910, 0.1330, 0.2828, 0.2141], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0213, 0.0216, 0.0198, 0.0245, 0.0192, 0.0218, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:01:57,680 INFO [finetune.py:976] (6/7) Epoch 25, batch 950, loss[loss=0.1809, simple_loss=0.2412, pruned_loss=0.06028, over 4768.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2395, pruned_loss=0.04876, over 949409.18 frames. ], batch size: 26, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 06:02:03,938 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1431, 2.3598, 1.9259, 1.9128, 2.7115, 2.8226, 2.2430, 2.1909], device='cuda:6'), covar=tensor([0.0425, 0.0292, 0.0579, 0.0331, 0.0205, 0.0336, 0.0267, 0.0327], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0107, 0.0146, 0.0112, 0.0101, 0.0113, 0.0102, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.7886e-05, 8.1737e-05, 1.1419e-04, 8.5897e-05, 7.8502e-05, 8.3867e-05, 7.5927e-05, 8.6010e-05], device='cuda:6') 2023-03-27 06:02:30,846 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.447e+02 1.781e+02 2.298e+02 4.571e+02, threshold=3.563e+02, percent-clipped=3.0 2023-03-27 06:02:30,862 INFO [finetune.py:976] (6/7) Epoch 25, batch 1000, loss[loss=0.1387, simple_loss=0.2167, pruned_loss=0.03034, over 4769.00 frames. 
], tot_loss[loss=0.171, simple_loss=0.2427, pruned_loss=0.04965, over 952801.52 frames. ], batch size: 28, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 06:02:35,751 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8200, 1.6526, 1.5586, 1.1728, 1.6989, 1.6929, 1.6376, 2.0799], device='cuda:6'), covar=tensor([0.3741, 0.3717, 0.3165, 0.3532, 0.3135, 0.2258, 0.3244, 0.2020], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0234, 0.0275, 0.0258, 0.0228, 0.0256, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:03:03,760 INFO [finetune.py:976] (6/7) Epoch 25, batch 1050, loss[loss=0.17, simple_loss=0.2545, pruned_loss=0.04281, over 4806.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2458, pruned_loss=0.05028, over 953166.52 frames. ], batch size: 41, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 06:03:10,906 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0860, 3.2856, 3.0287, 2.3018, 3.2683, 3.4860, 3.2851, 2.8747], device='cuda:6'), covar=tensor([0.0523, 0.0519, 0.0648, 0.0819, 0.0568, 0.0542, 0.0525, 0.0984], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0140, 0.0119, 0.0126, 0.0139, 0.0138, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:03:25,376 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138548.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:03:26,001 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9474, 1.6013, 2.2605, 1.4176, 2.0933, 2.2298, 1.5569, 2.2398], device='cuda:6'), covar=tensor([0.1115, 0.2249, 0.1142, 0.1897, 0.0765, 0.1183, 0.2810, 0.0742], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0207, 0.0192, 0.0190, 0.0174, 0.0214, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:03:27,136 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138551.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:03:39,080 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.570e+02 1.874e+02 2.177e+02 7.699e+02, threshold=3.747e+02, percent-clipped=3.0 2023-03-27 06:03:39,096 INFO [finetune.py:976] (6/7) Epoch 25, batch 1100, loss[loss=0.1355, simple_loss=0.2123, pruned_loss=0.02934, over 4781.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2471, pruned_loss=0.05067, over 954268.13 frames. ], batch size: 29, lr: 3.02e-03, grad_scale: 32.0 2023-03-27 06:04:17,874 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=138596.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:04:19,640 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=138599.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:04:29,870 INFO [finetune.py:976] (6/7) Epoch 25, batch 1150, loss[loss=0.2323, simple_loss=0.2787, pruned_loss=0.09291, over 4844.00 frames. ], tot_loss[loss=0.1743, simple_loss=0.2474, pruned_loss=0.05063, over 955244.94 frames. 
], batch size: 49, lr: 3.02e-03, grad_scale: 64.0 2023-03-27 06:04:39,030 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138626.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:05:03,421 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.832e+01 1.520e+02 1.733e+02 2.222e+02 3.582e+02, threshold=3.466e+02, percent-clipped=0.0 2023-03-27 06:05:03,437 INFO [finetune.py:976] (6/7) Epoch 25, batch 1200, loss[loss=0.1765, simple_loss=0.2459, pruned_loss=0.05357, over 4912.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2454, pruned_loss=0.04973, over 956440.06 frames. ], batch size: 37, lr: 3.02e-03, grad_scale: 64.0 2023-03-27 06:05:09,255 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7217, 1.9936, 1.6762, 1.6224, 2.2960, 2.2794, 1.9992, 1.8335], device='cuda:6'), covar=tensor([0.0412, 0.0337, 0.0625, 0.0404, 0.0293, 0.0550, 0.0338, 0.0449], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0108, 0.0147, 0.0113, 0.0102, 0.0115, 0.0103, 0.0114], device='cuda:6'), out_proj_covar=tensor([7.8744e-05, 8.2815e-05, 1.1529e-04, 8.6851e-05, 7.9405e-05, 8.4958e-05, 7.6677e-05, 8.7066e-05], device='cuda:6') 2023-03-27 06:05:13,802 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=138678.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:05:15,618 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138681.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:05:18,730 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138686.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:05:21,236 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.13 vs. limit=5.0 2023-03-27 06:05:25,900 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138697.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:05:37,223 INFO [finetune.py:976] (6/7) Epoch 25, batch 1250, loss[loss=0.1868, simple_loss=0.2558, pruned_loss=0.05889, over 4767.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2444, pruned_loss=0.0501, over 955549.60 frames. ], batch size: 26, lr: 3.02e-03, grad_scale: 64.0 2023-03-27 06:05:55,589 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138742.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:05:59,124 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138747.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:06:05,724 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138758.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:06:12,322 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.434e+02 1.700e+02 2.124e+02 3.591e+02, threshold=3.400e+02, percent-clipped=1.0 2023-03-27 06:06:12,338 INFO [finetune.py:976] (6/7) Epoch 25, batch 1300, loss[loss=0.137, simple_loss=0.2162, pruned_loss=0.02897, over 4826.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2408, pruned_loss=0.04879, over 954145.49 frames. ], batch size: 39, lr: 3.02e-03, grad_scale: 64.0 2023-03-27 06:06:57,378 INFO [finetune.py:976] (6/7) Epoch 25, batch 1350, loss[loss=0.1537, simple_loss=0.2298, pruned_loss=0.03884, over 4900.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.241, pruned_loss=0.0494, over 954485.90 frames. 
], batch size: 36, lr: 3.02e-03, grad_scale: 64.0 2023-03-27 06:07:31,274 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.573e+02 1.999e+02 2.321e+02 4.595e+02, threshold=3.999e+02, percent-clipped=3.0 2023-03-27 06:07:31,290 INFO [finetune.py:976] (6/7) Epoch 25, batch 1400, loss[loss=0.259, simple_loss=0.3063, pruned_loss=0.1058, over 4136.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2439, pruned_loss=0.05024, over 952979.44 frames. ], batch size: 65, lr: 3.02e-03, grad_scale: 64.0 2023-03-27 06:08:01,079 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=138909.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:08:04,579 INFO [finetune.py:976] (6/7) Epoch 25, batch 1450, loss[loss=0.1683, simple_loss=0.2471, pruned_loss=0.04477, over 4927.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2452, pruned_loss=0.05043, over 952706.20 frames. ], batch size: 33, lr: 3.01e-03, grad_scale: 64.0 2023-03-27 06:08:06,995 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1783, 2.0007, 2.0333, 1.0688, 2.2980, 2.5037, 2.2085, 1.8641], device='cuda:6'), covar=tensor([0.0974, 0.0662, 0.0547, 0.0635, 0.0550, 0.0737, 0.0441, 0.0808], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0127, 0.0123, 0.0131, 0.0130, 0.0142, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.9658e-05, 1.0716e-04, 9.0855e-05, 8.6451e-05, 9.2245e-05, 9.2454e-05, 1.0154e-04, 1.0596e-04], device='cuda:6') 2023-03-27 06:08:11,749 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138926.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:08:28,037 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9163, 1.9734, 1.9333, 1.3915, 2.0024, 2.1187, 2.0259, 1.6550], device='cuda:6'), covar=tensor([0.0635, 0.0651, 0.0708, 0.0850, 0.0650, 0.0675, 0.0615, 0.1159], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0120, 0.0127, 0.0140, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:08:38,070 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.768e+01 1.472e+02 1.793e+02 2.201e+02 3.947e+02, threshold=3.587e+02, percent-clipped=0.0 2023-03-27 06:08:38,086 INFO [finetune.py:976] (6/7) Epoch 25, batch 1500, loss[loss=0.1751, simple_loss=0.2349, pruned_loss=0.05764, over 4806.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2453, pruned_loss=0.05017, over 954118.38 frames. 
], batch size: 25, lr: 3.01e-03, grad_scale: 64.0 2023-03-27 06:08:41,656 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=138970.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:08:44,024 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=138974.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:08:46,505 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=138978.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:08:55,699 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0230, 1.9834, 1.7385, 2.2136, 2.6048, 2.1055, 2.0993, 1.5255], device='cuda:6'), covar=tensor([0.2463, 0.2114, 0.2096, 0.1738, 0.1914, 0.1254, 0.2257, 0.2102], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0198, 0.0245, 0.0192, 0.0218, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:09:23,185 INFO [finetune.py:976] (6/7) Epoch 25, batch 1550, loss[loss=0.1722, simple_loss=0.249, pruned_loss=0.04773, over 4811.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2445, pruned_loss=0.0494, over 954150.15 frames. ], batch size: 39, lr: 3.01e-03, grad_scale: 64.0 2023-03-27 06:09:34,898 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139026.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:09:47,266 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139037.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:09:49,768 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0 2023-03-27 06:09:50,282 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139042.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:09:57,470 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139053.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:10:04,643 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7052, 1.5593, 2.0473, 3.3886, 2.4052, 2.5552, 1.0135, 2.8586], device='cuda:6'), covar=tensor([0.1694, 0.1419, 0.1378, 0.0580, 0.0739, 0.1180, 0.1804, 0.0483], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0165, 0.0102, 0.0137, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 06:10:05,107 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.522e+01 1.375e+02 1.683e+02 2.077e+02 3.862e+02, threshold=3.366e+02, percent-clipped=3.0 2023-03-27 06:10:05,123 INFO [finetune.py:976] (6/7) Epoch 25, batch 1600, loss[loss=0.1804, simple_loss=0.2457, pruned_loss=0.05753, over 4872.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2427, pruned_loss=0.04908, over 954804.07 frames. ], batch size: 34, lr: 3.01e-03, grad_scale: 64.0 2023-03-27 06:10:38,952 INFO [finetune.py:976] (6/7) Epoch 25, batch 1650, loss[loss=0.1554, simple_loss=0.2166, pruned_loss=0.04706, over 4788.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2405, pruned_loss=0.04826, over 954180.56 frames. ], batch size: 29, lr: 3.01e-03, grad_scale: 64.0 2023-03-27 06:10:41,874 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-27 06:11:12,575 INFO [finetune.py:976] (6/7) Epoch 25, batch 1700, loss[loss=0.1783, simple_loss=0.2466, pruned_loss=0.05501, over 4912.00 frames. 
], tot_loss[loss=0.1678, simple_loss=0.2394, pruned_loss=0.04807, over 953700.86 frames. ], batch size: 37, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:11:13,176 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.831e+01 1.435e+02 1.759e+02 2.193e+02 3.727e+02, threshold=3.518e+02, percent-clipped=3.0 2023-03-27 06:11:46,412 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1024, 2.0198, 1.6490, 2.2101, 2.7810, 2.1660, 2.2912, 1.5748], device='cuda:6'), covar=tensor([0.2257, 0.1998, 0.2005, 0.1583, 0.1673, 0.1203, 0.1861, 0.1887], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0199, 0.0246, 0.0192, 0.0218, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:11:56,399 INFO [finetune.py:976] (6/7) Epoch 25, batch 1750, loss[loss=0.1761, simple_loss=0.2491, pruned_loss=0.05148, over 4925.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2423, pruned_loss=0.04957, over 953703.30 frames. ], batch size: 33, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:12:20,710 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139238.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:12:30,555 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6266, 1.5545, 1.4865, 1.6277, 1.1137, 3.6978, 1.4705, 1.8698], device='cuda:6'), covar=tensor([0.3394, 0.2707, 0.2285, 0.2527, 0.1882, 0.0185, 0.2628, 0.1361], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:12:37,959 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 06:12:39,406 INFO [finetune.py:976] (6/7) Epoch 25, batch 1800, loss[loss=0.1809, simple_loss=0.2516, pruned_loss=0.05508, over 4910.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2448, pruned_loss=0.05007, over 952767.46 frames. ], batch size: 37, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:12:39,476 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139265.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:12:39,958 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.974e+01 1.553e+02 1.833e+02 2.131e+02 4.022e+02, threshold=3.667e+02, percent-clipped=3.0 2023-03-27 06:12:46,686 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139276.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:12:55,154 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139289.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:13:02,260 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139299.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:13:13,308 INFO [finetune.py:976] (6/7) Epoch 25, batch 1850, loss[loss=0.1643, simple_loss=0.2332, pruned_loss=0.04769, over 4926.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.245, pruned_loss=0.04999, over 952969.71 frames. 
], batch size: 38, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:13:15,267 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8023, 1.7823, 1.7151, 1.7518, 1.4605, 3.3324, 1.7292, 2.1147], device='cuda:6'), covar=tensor([0.2806, 0.2152, 0.1820, 0.1976, 0.1459, 0.0268, 0.2355, 0.1034], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0123, 0.0112, 0.0096, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:13:27,856 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139337.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:13:27,880 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139337.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 06:13:30,862 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139342.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:13:36,177 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139350.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:13:38,901 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139353.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:13:46,547 INFO [finetune.py:976] (6/7) Epoch 25, batch 1900, loss[loss=0.1667, simple_loss=0.2414, pruned_loss=0.04603, over 4794.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2464, pruned_loss=0.05016, over 954818.26 frames. ], batch size: 25, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:13:47,141 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.007e+02 1.553e+02 1.835e+02 2.150e+02 3.557e+02, threshold=3.671e+02, percent-clipped=0.0 2023-03-27 06:13:59,669 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139385.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:14:03,135 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139390.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:14:10,507 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139401.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:14:19,885 INFO [finetune.py:976] (6/7) Epoch 25, batch 1950, loss[loss=0.1657, simple_loss=0.2342, pruned_loss=0.04854, over 4813.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2461, pruned_loss=0.05047, over 954542.21 frames. 
], batch size: 25, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:14:38,477 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0800, 2.0124, 1.7876, 1.9054, 1.9049, 1.8970, 1.9807, 2.6488], device='cuda:6'), covar=tensor([0.3726, 0.4070, 0.3066, 0.3921, 0.3968, 0.2505, 0.3459, 0.1613], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0264, 0.0236, 0.0276, 0.0259, 0.0230, 0.0257, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:14:39,004 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3210, 2.8997, 2.7217, 1.2365, 2.9932, 2.1567, 0.5445, 1.9534], device='cuda:6'), covar=tensor([0.2397, 0.2614, 0.2207, 0.3795, 0.1504, 0.1272, 0.4563, 0.1727], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0178, 0.0161, 0.0130, 0.0160, 0.0123, 0.0147, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 06:15:11,616 INFO [finetune.py:976] (6/7) Epoch 25, batch 2000, loss[loss=0.1352, simple_loss=0.2115, pruned_loss=0.02943, over 4821.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2442, pruned_loss=0.05008, over 954412.96 frames. ], batch size: 38, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:15:12,707 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.253e+01 1.362e+02 1.721e+02 2.187e+02 3.038e+02, threshold=3.442e+02, percent-clipped=0.0 2023-03-27 06:15:39,534 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9345, 1.7509, 1.6377, 1.8409, 2.1689, 2.1906, 1.7687, 1.6723], device='cuda:6'), covar=tensor([0.0312, 0.0345, 0.0552, 0.0293, 0.0219, 0.0376, 0.0369, 0.0399], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0107, 0.0145, 0.0112, 0.0101, 0.0114, 0.0102, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7956e-05, 8.1815e-05, 1.1371e-04, 8.5629e-05, 7.8155e-05, 8.4176e-05, 7.5973e-05, 8.5568e-05], device='cuda:6') 2023-03-27 06:15:45,237 INFO [finetune.py:976] (6/7) Epoch 25, batch 2050, loss[loss=0.1916, simple_loss=0.2543, pruned_loss=0.06445, over 4845.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2407, pruned_loss=0.04906, over 954331.38 frames. ], batch size: 44, lr: 3.01e-03, grad_scale: 32.0 2023-03-27 06:16:03,125 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4091, 1.3115, 1.3055, 1.3175, 0.9686, 2.2369, 0.7993, 1.3234], device='cuda:6'), covar=tensor([0.3330, 0.2512, 0.2263, 0.2516, 0.1756, 0.0411, 0.2716, 0.1251], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:16:18,438 INFO [finetune.py:976] (6/7) Epoch 25, batch 2100, loss[loss=0.1683, simple_loss=0.232, pruned_loss=0.05228, over 4765.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2402, pruned_loss=0.04931, over 955391.65 frames. 
], batch size: 26, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:16:18,536 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139565.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:16:20,123 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.352e+01 1.449e+02 1.714e+02 2.109e+02 3.824e+02, threshold=3.428e+02, percent-clipped=2.0 2023-03-27 06:16:23,221 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139571.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:16:30,397 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139582.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:16:38,030 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139594.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:16:50,186 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2131, 2.0835, 1.5316, 0.7134, 1.7125, 1.8362, 1.7306, 1.8818], device='cuda:6'), covar=tensor([0.0806, 0.0733, 0.1421, 0.1881, 0.1282, 0.2312, 0.2043, 0.0851], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0200, 0.0181, 0.0208, 0.0210, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:16:50,723 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139613.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:16:52,380 INFO [finetune.py:976] (6/7) Epoch 25, batch 2150, loss[loss=0.2216, simple_loss=0.2855, pruned_loss=0.07884, over 4203.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2431, pruned_loss=0.04986, over 954782.79 frames. ], batch size: 65, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:17:09,456 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139632.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 06:17:09,491 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139632.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:17:26,409 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139643.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:17:27,567 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139645.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:17:43,824 INFO [finetune.py:976] (6/7) Epoch 25, batch 2200, loss[loss=0.2302, simple_loss=0.2804, pruned_loss=0.09004, over 4828.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2462, pruned_loss=0.05054, over 956519.07 frames. ], batch size: 47, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:17:45,447 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.512e+02 1.789e+02 2.111e+02 3.462e+02, threshold=3.578e+02, percent-clipped=1.0 2023-03-27 06:17:53,233 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9059, 1.4081, 2.0298, 1.9574, 1.7233, 1.6775, 1.8970, 1.9018], device='cuda:6'), covar=tensor([0.4086, 0.3874, 0.3075, 0.3587, 0.4751, 0.3921, 0.4242, 0.2877], device='cuda:6'), in_proj_covar=tensor([0.0262, 0.0245, 0.0265, 0.0290, 0.0291, 0.0267, 0.0296, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:18:17,067 INFO [finetune.py:976] (6/7) Epoch 25, batch 2250, loss[loss=0.1765, simple_loss=0.2533, pruned_loss=0.04986, over 4749.00 frames. 
], tot_loss[loss=0.1756, simple_loss=0.2477, pruned_loss=0.05171, over 955609.85 frames. ], batch size: 54, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:18:18,927 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0 2023-03-27 06:18:50,822 INFO [finetune.py:976] (6/7) Epoch 25, batch 2300, loss[loss=0.1653, simple_loss=0.2313, pruned_loss=0.04966, over 4814.00 frames. ], tot_loss[loss=0.1752, simple_loss=0.2477, pruned_loss=0.05129, over 956934.40 frames. ], batch size: 38, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:18:52,011 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.533e+02 1.822e+02 2.118e+02 3.916e+02, threshold=3.645e+02, percent-clipped=1.0 2023-03-27 06:18:53,820 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139769.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:06,799 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4809, 2.3716, 1.9677, 0.9128, 2.0174, 1.9718, 1.8615, 2.1621], device='cuda:6'), covar=tensor([0.0955, 0.0736, 0.1620, 0.2002, 0.1479, 0.2219, 0.1981, 0.0995], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0190, 0.0199, 0.0180, 0.0208, 0.0209, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:19:06,801 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139789.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:16,878 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139804.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:18,109 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139806.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:22,850 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4590, 1.4730, 1.5583, 0.8247, 1.6288, 1.7468, 1.7949, 1.4180], device='cuda:6'), covar=tensor([0.0877, 0.0712, 0.0589, 0.0540, 0.0497, 0.0685, 0.0378, 0.0756], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0127, 0.0122, 0.0130, 0.0129, 0.0141, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.9073e-05, 1.0624e-04, 9.0937e-05, 8.5974e-05, 9.0724e-05, 9.1622e-05, 1.0037e-04, 1.0503e-04], device='cuda:6') 2023-03-27 06:19:23,938 INFO [finetune.py:976] (6/7) Epoch 25, batch 2350, loss[loss=0.1799, simple_loss=0.2235, pruned_loss=0.06817, over 4696.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.2449, pruned_loss=0.05031, over 957416.75 frames. ], batch size: 23, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:19:35,062 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:54,893 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=139850.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:54,903 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139850.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:19:55,086 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 06:20:07,504 INFO [finetune.py:976] (6/7) Epoch 25, batch 2400, loss[loss=0.1422, simple_loss=0.2103, pruned_loss=0.03708, over 4902.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2422, pruned_loss=0.05003, over 955130.27 frames. 
], batch size: 36, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:20:09,535 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139865.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:10,612 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.173e+01 1.419e+02 1.768e+02 2.081e+02 3.267e+02, threshold=3.536e+02, percent-clipped=0.0 2023-03-27 06:20:10,747 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139867.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:30,520 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1860, 1.7770, 2.2453, 2.2280, 1.9352, 1.9181, 2.2195, 2.1215], device='cuda:6'), covar=tensor([0.4216, 0.4581, 0.3120, 0.4044, 0.5092, 0.3988, 0.4848, 0.2997], device='cuda:6'), in_proj_covar=tensor([0.0262, 0.0246, 0.0265, 0.0291, 0.0291, 0.0268, 0.0297, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:20:35,971 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139894.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:47,286 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=139911.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:20:50,037 INFO [finetune.py:976] (6/7) Epoch 25, batch 2450, loss[loss=0.1103, simple_loss=0.1885, pruned_loss=0.01602, over 4765.00 frames. ], tot_loss[loss=0.168, simple_loss=0.239, pruned_loss=0.04854, over 954128.04 frames. ], batch size: 28, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:20:57,354 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139927.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:01,389 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139932.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 06:21:05,446 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=139938.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:08,396 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139942.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:10,194 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=139945.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:22,672 INFO [finetune.py:976] (6/7) Epoch 25, batch 2500, loss[loss=0.1799, simple_loss=0.243, pruned_loss=0.05842, over 4749.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2412, pruned_loss=0.04983, over 953033.42 frames. ], batch size: 23, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:21:24,360 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.523e+02 1.884e+02 2.422e+02 3.755e+02, threshold=3.768e+02, percent-clipped=3.0 2023-03-27 06:21:32,841 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139980.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:42,132 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=139993.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:21:57,548 INFO [finetune.py:976] (6/7) Epoch 25, batch 2550, loss[loss=0.2528, simple_loss=0.3226, pruned_loss=0.09151, over 4835.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.244, pruned_loss=0.05059, over 951640.72 frames. 
], batch size: 47, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:22:23,215 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8627, 1.6655, 2.3779, 1.4948, 2.0465, 2.1620, 1.5287, 2.3035], device='cuda:6'), covar=tensor([0.1440, 0.2148, 0.1315, 0.2004, 0.0908, 0.1499, 0.2914, 0.0944], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0192, 0.0191, 0.0176, 0.0214, 0.0216, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:22:36,080 INFO [finetune.py:976] (6/7) Epoch 25, batch 2600, loss[loss=0.1605, simple_loss=0.2248, pruned_loss=0.04806, over 4906.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2446, pruned_loss=0.0506, over 952372.18 frames. ], batch size: 37, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:22:42,048 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.019e+02 1.547e+02 1.969e+02 2.320e+02 4.703e+02, threshold=3.938e+02, percent-clipped=1.0 2023-03-27 06:23:14,140 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140103.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:19,333 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140110.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:22,260 INFO [finetune.py:976] (6/7) Epoch 25, batch 2650, loss[loss=0.1443, simple_loss=0.2062, pruned_loss=0.04115, over 4102.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2465, pruned_loss=0.05092, over 954120.43 frames. ], batch size: 17, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:23:28,793 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140125.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:37,224 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4447, 2.3785, 1.9370, 2.6149, 2.3508, 2.0588, 2.8624, 2.4812], device='cuda:6'), covar=tensor([0.1211, 0.1973, 0.2731, 0.2379, 0.2417, 0.1509, 0.2726, 0.1612], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0191, 0.0237, 0.0254, 0.0251, 0.0206, 0.0216, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:23:40,090 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.52 vs. 
limit=5.0 2023-03-27 06:23:41,816 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140145.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:51,791 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140160.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:53,482 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140162.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:53,559 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0059, 1.4947, 2.0499, 2.0000, 1.8037, 1.7326, 1.9290, 1.9515], device='cuda:6'), covar=tensor([0.3399, 0.3382, 0.2641, 0.3286, 0.3810, 0.3369, 0.3541, 0.2463], device='cuda:6'), in_proj_covar=tensor([0.0262, 0.0245, 0.0265, 0.0290, 0.0290, 0.0267, 0.0296, 0.0247], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:23:55,130 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140164.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:23:55,634 INFO [finetune.py:976] (6/7) Epoch 25, batch 2700, loss[loss=0.1507, simple_loss=0.2208, pruned_loss=0.04031, over 4828.00 frames. ], tot_loss[loss=0.1735, simple_loss=0.2459, pruned_loss=0.05052, over 953678.69 frames. ], batch size: 49, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:23:56,851 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.476e+02 1.708e+02 2.136e+02 4.297e+02, threshold=3.417e+02, percent-clipped=1.0 2023-03-27 06:23:59,449 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140171.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:01,335 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 06:24:22,851 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140206.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:28,676 INFO [finetune.py:976] (6/7) Epoch 25, batch 2750, loss[loss=0.1758, simple_loss=0.2453, pruned_loss=0.05311, over 4913.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2434, pruned_loss=0.04983, over 956367.70 frames. ], batch size: 36, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:24:29,638 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.67 vs. limit=5.0 2023-03-27 06:24:36,545 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140227.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:43,688 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140238.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:24:50,146 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9655, 2.8736, 2.5798, 3.0475, 2.7832, 2.8677, 2.7517, 3.6693], device='cuda:6'), covar=tensor([0.3113, 0.3953, 0.2997, 0.3128, 0.3422, 0.2278, 0.3314, 0.1451], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0235, 0.0276, 0.0259, 0.0229, 0.0257, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:24:58,272 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140259.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:25:01,787 INFO [finetune.py:976] (6/7) Epoch 25, batch 2800, loss[loss=0.1759, simple_loss=0.2451, pruned_loss=0.05336, over 4300.00 frames. 
], tot_loss[loss=0.1685, simple_loss=0.2399, pruned_loss=0.04848, over 955457.45 frames. ], batch size: 65, lr: 3.01e-03, grad_scale: 16.0 2023-03-27 06:25:02,941 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.021e+02 1.479e+02 1.751e+02 2.221e+02 3.486e+02, threshold=3.502e+02, percent-clipped=1.0 2023-03-27 06:25:10,749 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140275.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:25:22,267 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140286.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:25:53,837 INFO [finetune.py:976] (6/7) Epoch 25, batch 2850, loss[loss=0.2021, simple_loss=0.2811, pruned_loss=0.06158, over 4864.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2391, pruned_loss=0.0482, over 955693.98 frames. ], batch size: 49, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:25:55,179 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7227, 1.6382, 1.4271, 1.8409, 2.3106, 1.8749, 1.6406, 1.3706], device='cuda:6'), covar=tensor([0.2169, 0.2016, 0.1963, 0.1650, 0.1655, 0.1220, 0.2346, 0.1954], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0214, 0.0198, 0.0245, 0.0190, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:25:56,966 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140320.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:26:03,413 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1425, 1.3490, 1.4166, 1.3313, 1.5036, 2.4688, 1.2462, 1.4700], device='cuda:6'), covar=tensor([0.1096, 0.1902, 0.1027, 0.0983, 0.1702, 0.0411, 0.1622, 0.1953], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:26:11,105 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9007, 2.1023, 1.7417, 1.8021, 2.4227, 2.4620, 2.0157, 1.9891], device='cuda:6'), covar=tensor([0.0391, 0.0333, 0.0610, 0.0331, 0.0250, 0.0483, 0.0420, 0.0386], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0107, 0.0145, 0.0111, 0.0100, 0.0114, 0.0103, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7949e-05, 8.1557e-05, 1.1344e-04, 8.4996e-05, 7.7683e-05, 8.4320e-05, 7.6142e-05, 8.5273e-05], device='cuda:6') 2023-03-27 06:26:27,557 INFO [finetune.py:976] (6/7) Epoch 25, batch 2900, loss[loss=0.1366, simple_loss=0.2214, pruned_loss=0.02585, over 4759.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2429, pruned_loss=0.04962, over 954116.48 frames. ], batch size: 28, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:26:28,757 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.105e+02 1.583e+02 1.866e+02 2.190e+02 4.311e+02, threshold=3.732e+02, percent-clipped=1.0 2023-03-27 06:26:41,969 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140387.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:01,453 INFO [finetune.py:976] (6/7) Epoch 25, batch 2950, loss[loss=0.207, simple_loss=0.2843, pruned_loss=0.06488, over 4848.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2452, pruned_loss=0.0502, over 954098.92 frames. 
], batch size: 47, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:27:06,957 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4911, 1.4571, 1.4573, 1.5055, 0.9804, 2.8850, 1.0185, 1.4883], device='cuda:6'), covar=tensor([0.3152, 0.2454, 0.1981, 0.2253, 0.1717, 0.0254, 0.2645, 0.1216], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0123, 0.0113, 0.0096, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:27:08,064 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140425.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:21,224 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140445.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:23,056 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140448.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:30,138 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140459.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:30,780 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140460.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:32,482 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140462.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:34,686 INFO [finetune.py:976] (6/7) Epoch 25, batch 3000, loss[loss=0.1884, simple_loss=0.2495, pruned_loss=0.06368, over 4923.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2468, pruned_loss=0.05123, over 954965.61 frames. ], batch size: 33, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:27:34,686 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 06:27:48,789 INFO [finetune.py:1010] (6/7) Epoch 25, validation: loss=0.1571, simple_loss=0.2254, pruned_loss=0.04443, over 2265189.00 frames. 
2023-03-27 06:27:48,789 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 06:27:49,983 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140466.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:27:50,500 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.568e+02 1.888e+02 2.214e+02 4.503e+02, threshold=3.776e+02, percent-clipped=3.0 2023-03-27 06:27:59,536 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140473.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:17,970 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140489.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:20,352 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140493.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:29,177 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140506.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:29,205 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5004, 2.3824, 1.9054, 0.9782, 2.0140, 1.9363, 1.7849, 2.1042], device='cuda:6'), covar=tensor([0.0976, 0.0805, 0.1650, 0.2069, 0.1558, 0.2155, 0.2239, 0.1049], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0201, 0.0181, 0.0210, 0.0211, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:28:30,344 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140508.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:30,997 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140509.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:31,587 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140510.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:28:31,720 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.46 vs. limit=2.0 2023-03-27 06:28:34,544 INFO [finetune.py:976] (6/7) Epoch 25, batch 3050, loss[loss=0.1544, simple_loss=0.2364, pruned_loss=0.03621, over 4927.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.248, pruned_loss=0.0516, over 953769.67 frames. ], batch size: 33, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:28:58,618 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140550.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:29:00,944 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140554.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:29:08,055 INFO [finetune.py:976] (6/7) Epoch 25, batch 3100, loss[loss=0.1541, simple_loss=0.2171, pruned_loss=0.04558, over 4163.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2451, pruned_loss=0.05045, over 953971.53 frames. ], batch size: 18, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:29:09,242 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.313e+01 1.495e+02 1.767e+02 2.180e+02 4.499e+02, threshold=3.535e+02, percent-clipped=1.0 2023-03-27 06:29:12,198 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140570.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:29:42,053 INFO [finetune.py:976] (6/7) Epoch 25, batch 3150, loss[loss=0.1425, simple_loss=0.2236, pruned_loss=0.03073, over 4897.00 frames. 
], tot_loss[loss=0.1717, simple_loss=0.2431, pruned_loss=0.05014, over 954174.22 frames. ], batch size: 32, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:29:42,123 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140615.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:29:45,857 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3355, 2.3842, 2.0232, 2.4762, 2.2896, 2.2527, 2.2389, 3.1891], device='cuda:6'), covar=tensor([0.3652, 0.4482, 0.3248, 0.4054, 0.4429, 0.2487, 0.4130, 0.1542], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0264, 0.0236, 0.0277, 0.0260, 0.0230, 0.0257, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:30:09,192 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1118, 3.6288, 3.7985, 3.9740, 3.8845, 3.6344, 4.1854, 1.3038], device='cuda:6'), covar=tensor([0.0855, 0.0888, 0.0923, 0.1073, 0.1362, 0.1613, 0.0854, 0.5904], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0249, 0.0284, 0.0297, 0.0339, 0.0288, 0.0309, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:30:15,050 INFO [finetune.py:976] (6/7) Epoch 25, batch 3200, loss[loss=0.1196, simple_loss=0.1975, pruned_loss=0.02086, over 4794.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2396, pruned_loss=0.04881, over 955327.74 frames. ], batch size: 29, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:30:16,222 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.929e+01 1.486e+02 1.750e+02 2.144e+02 4.466e+02, threshold=3.500e+02, percent-clipped=2.0 2023-03-27 06:30:38,953 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6141, 1.0486, 0.8687, 1.5494, 2.0104, 1.5135, 1.3627, 1.5483], device='cuda:6'), covar=tensor([0.1584, 0.2372, 0.1967, 0.1337, 0.1976, 0.2112, 0.1578, 0.2092], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0118, 0.0093, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 06:30:58,022 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5305, 3.3171, 3.1885, 1.3547, 3.3881, 2.5197, 0.8848, 2.2524], device='cuda:6'), covar=tensor([0.2417, 0.2118, 0.1687, 0.3479, 0.1309, 0.1010, 0.4057, 0.1615], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0179, 0.0162, 0.0130, 0.0161, 0.0124, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 06:31:06,560 INFO [finetune.py:976] (6/7) Epoch 25, batch 3250, loss[loss=0.204, simple_loss=0.279, pruned_loss=0.06447, over 4905.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2416, pruned_loss=0.04994, over 955466.91 frames. 
], batch size: 43, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:31:11,487 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140723.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:25,189 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140743.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:25,237 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5023, 1.3881, 1.3104, 1.4375, 1.6762, 1.6534, 1.4226, 1.3180], device='cuda:6'), covar=tensor([0.0356, 0.0320, 0.0635, 0.0312, 0.0229, 0.0395, 0.0379, 0.0410], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0106, 0.0144, 0.0110, 0.0099, 0.0113, 0.0102, 0.0111], device='cuda:6'), out_proj_covar=tensor([7.6908e-05, 8.0904e-05, 1.1248e-04, 8.4062e-05, 7.6938e-05, 8.3148e-05, 7.5511e-05, 8.4361e-05], device='cuda:6') 2023-03-27 06:31:35,343 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140759.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:39,362 INFO [finetune.py:976] (6/7) Epoch 25, batch 3300, loss[loss=0.14, simple_loss=0.2137, pruned_loss=0.0332, over 4729.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2431, pruned_loss=0.04998, over 953163.70 frames. ], batch size: 23, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:31:40,543 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140766.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:31:41,064 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.629e+02 1.945e+02 2.397e+02 4.021e+02, threshold=3.889e+02, percent-clipped=5.0 2023-03-27 06:31:43,018 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7483, 1.2306, 0.8779, 1.5744, 2.0354, 1.2961, 1.5392, 1.6026], device='cuda:6'), covar=tensor([0.1437, 0.2049, 0.1770, 0.1157, 0.1984, 0.1805, 0.1352, 0.1893], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0118, 0.0093, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 06:31:53,076 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140784.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:02,798 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. limit=2.0 2023-03-27 06:32:08,086 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140807.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:09,341 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140809.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:12,394 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140814.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:12,932 INFO [finetune.py:976] (6/7) Epoch 25, batch 3350, loss[loss=0.1689, simple_loss=0.2431, pruned_loss=0.04737, over 4856.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2452, pruned_loss=0.05069, over 951279.43 frames. 
], batch size: 31, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:32:33,956 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140845.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:44,306 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0998, 1.8856, 1.7155, 2.0649, 2.6606, 2.1375, 2.2160, 1.5857], device='cuda:6'), covar=tensor([0.2075, 0.1960, 0.1859, 0.1686, 0.1694, 0.1201, 0.1809, 0.1832], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0215, 0.0198, 0.0245, 0.0191, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:32:46,606 INFO [finetune.py:976] (6/7) Epoch 25, batch 3400, loss[loss=0.1995, simple_loss=0.27, pruned_loss=0.06447, over 4196.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2462, pruned_loss=0.0513, over 952570.38 frames. ], batch size: 65, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:32:46,675 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=140865.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:32:47,793 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.564e+02 1.878e+02 2.236e+02 3.278e+02, threshold=3.756e+02, percent-clipped=0.0 2023-03-27 06:32:49,700 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140870.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:33:19,366 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.0452, 3.4879, 3.6743, 3.8857, 3.8566, 3.5606, 4.1191, 1.3040], device='cuda:6'), covar=tensor([0.0795, 0.0841, 0.0878, 0.1012, 0.1179, 0.1534, 0.0792, 0.5557], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0247, 0.0282, 0.0296, 0.0337, 0.0286, 0.0307, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:33:38,072 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3925, 2.5504, 2.4602, 1.6250, 2.2539, 2.6838, 2.5913, 2.2059], device='cuda:6'), covar=tensor([0.0619, 0.0574, 0.0748, 0.0960, 0.1093, 0.0712, 0.0588, 0.0957], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0136, 0.0140, 0.0119, 0.0126, 0.0137, 0.0138, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:33:39,168 INFO [finetune.py:976] (6/7) Epoch 25, batch 3450, loss[loss=0.1433, simple_loss=0.2152, pruned_loss=0.03567, over 4873.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2456, pruned_loss=0.05045, over 953592.29 frames. 
], batch size: 31, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:33:39,280 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=140915.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:33:40,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140917.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:01,078 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4371, 1.3778, 1.3432, 1.3752, 0.8685, 2.2922, 0.7698, 1.2833], device='cuda:6'), covar=tensor([0.3327, 0.2547, 0.2160, 0.2394, 0.1810, 0.0402, 0.2676, 0.1327], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0123, 0.0112, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:34:01,684 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=140947.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:11,813 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=140963.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:12,941 INFO [finetune.py:976] (6/7) Epoch 25, batch 3500, loss[loss=0.1503, simple_loss=0.2145, pruned_loss=0.04301, over 4810.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2439, pruned_loss=0.05017, over 953617.63 frames. ], batch size: 33, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:34:14,175 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.468e+02 1.748e+02 2.204e+02 3.629e+02, threshold=3.496e+02, percent-clipped=0.0 2023-03-27 06:34:17,633 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.72 vs. limit=5.0 2023-03-27 06:34:20,886 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=140978.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:41,919 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141008.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:34:46,048 INFO [finetune.py:976] (6/7) Epoch 25, batch 3550, loss[loss=0.2373, simple_loss=0.2767, pruned_loss=0.09892, over 4068.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2407, pruned_loss=0.04924, over 953293.94 frames. ], batch size: 65, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:34:47,248 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2401, 2.0838, 1.5280, 0.6226, 1.7079, 1.8965, 1.7699, 1.8559], device='cuda:6'), covar=tensor([0.0960, 0.0914, 0.1562, 0.2213, 0.1478, 0.2386, 0.2348, 0.0941], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0190, 0.0199, 0.0180, 0.0209, 0.0210, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:35:04,135 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141043.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:35:19,345 INFO [finetune.py:976] (6/7) Epoch 25, batch 3600, loss[loss=0.1745, simple_loss=0.2609, pruned_loss=0.04405, over 4910.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2382, pruned_loss=0.04849, over 953437.51 frames. 
], batch size: 36, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:35:20,527 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.456e+02 1.796e+02 2.356e+02 3.995e+02, threshold=3.592e+02, percent-clipped=1.0 2023-03-27 06:35:28,424 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141079.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:35:36,622 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141091.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:00,468 INFO [finetune.py:976] (6/7) Epoch 25, batch 3650, loss[loss=0.2019, simple_loss=0.273, pruned_loss=0.06538, over 4733.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2409, pruned_loss=0.04939, over 954249.40 frames. ], batch size: 54, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:36:30,906 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141143.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:32,113 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141145.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:38,423 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6749, 1.6376, 1.5497, 0.8095, 1.7476, 1.8849, 1.8104, 1.4596], device='cuda:6'), covar=tensor([0.1042, 0.0785, 0.0509, 0.0698, 0.0473, 0.0584, 0.0433, 0.0781], device='cuda:6'), in_proj_covar=tensor([0.0123, 0.0149, 0.0128, 0.0123, 0.0131, 0.0130, 0.0142, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.9812e-05, 1.0692e-04, 9.1200e-05, 8.6594e-05, 9.1690e-05, 9.2750e-05, 1.0102e-04, 1.0607e-04], device='cuda:6') 2023-03-27 06:36:46,081 INFO [finetune.py:976] (6/7) Epoch 25, batch 3700, loss[loss=0.1667, simple_loss=0.2408, pruned_loss=0.04632, over 4888.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2442, pruned_loss=0.05047, over 951234.84 frames. 
], batch size: 32, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:36:46,156 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141165.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:46,175 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141165.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:36:47,286 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.708e+02 2.029e+02 2.382e+02 3.628e+02, threshold=4.058e+02, percent-clipped=1.0 2023-03-27 06:37:03,881 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141193.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:37:03,936 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4631, 1.3504, 1.4640, 0.8525, 1.4987, 1.4849, 1.4492, 1.2315], device='cuda:6'), covar=tensor([0.0583, 0.0766, 0.0713, 0.0901, 0.0915, 0.0697, 0.0631, 0.1301], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0137, 0.0141, 0.0120, 0.0127, 0.0139, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:37:11,712 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141204.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:37:15,103 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8335, 3.3898, 3.5537, 3.7375, 3.6214, 3.4362, 3.9295, 1.1341], device='cuda:6'), covar=tensor([0.0890, 0.0945, 0.0926, 0.0922, 0.1359, 0.1546, 0.0833, 0.5621], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0248, 0.0284, 0.0297, 0.0339, 0.0288, 0.0308, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:37:18,589 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141213.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:37:19,743 INFO [finetune.py:976] (6/7) Epoch 25, batch 3750, loss[loss=0.1772, simple_loss=0.2583, pruned_loss=0.04805, over 4921.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.247, pruned_loss=0.05182, over 952440.93 frames. ], batch size: 38, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:37:45,898 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8284, 3.3914, 3.5744, 3.7297, 3.6104, 3.3861, 3.9175, 1.1965], device='cuda:6'), covar=tensor([0.1013, 0.0949, 0.0957, 0.1147, 0.1490, 0.1800, 0.0891, 0.5856], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0248, 0.0283, 0.0296, 0.0339, 0.0288, 0.0308, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:37:52,661 INFO [finetune.py:976] (6/7) Epoch 25, batch 3800, loss[loss=0.2006, simple_loss=0.2776, pruned_loss=0.06179, over 4825.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.247, pruned_loss=0.05106, over 953959.49 frames. 
], batch size: 47, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:37:54,343 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.018e+02 1.508e+02 1.827e+02 2.217e+02 6.513e+02, threshold=3.654e+02, percent-clipped=2.0 2023-03-27 06:37:58,031 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141273.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:38:19,447 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141303.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:38:32,078 INFO [finetune.py:976] (6/7) Epoch 25, batch 3850, loss[loss=0.1897, simple_loss=0.2692, pruned_loss=0.05506, over 4913.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2452, pruned_loss=0.05058, over 954641.04 frames. ], batch size: 36, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:38:49,946 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141329.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:38:50,630 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.51 vs. limit=5.0 2023-03-27 06:39:16,920 INFO [finetune.py:976] (6/7) Epoch 25, batch 3900, loss[loss=0.1409, simple_loss=0.2162, pruned_loss=0.03285, over 4859.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2433, pruned_loss=0.04993, over 955750.35 frames. ], batch size: 34, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:39:18,104 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.088e+02 1.504e+02 1.773e+02 2.110e+02 6.012e+02, threshold=3.546e+02, percent-clipped=1.0 2023-03-27 06:39:26,401 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141379.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:39:33,594 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141390.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:39:39,073 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5550, 3.5284, 3.3471, 1.6063, 3.6045, 2.8109, 0.9889, 2.5700], device='cuda:6'), covar=tensor([0.2797, 0.1905, 0.1665, 0.3448, 0.1125, 0.1014, 0.4071, 0.1494], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0178, 0.0161, 0.0130, 0.0160, 0.0123, 0.0147, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 06:39:49,666 INFO [finetune.py:976] (6/7) Epoch 25, batch 3950, loss[loss=0.1665, simple_loss=0.2476, pruned_loss=0.04268, over 4847.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2413, pruned_loss=0.04952, over 957326.57 frames. ], batch size: 47, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:39:51,513 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141417.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 06:39:58,922 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141427.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:07,834 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141441.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:23,350 INFO [finetune.py:976] (6/7) Epoch 25, batch 4000, loss[loss=0.2038, simple_loss=0.269, pruned_loss=0.06936, over 4935.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2414, pruned_loss=0.04979, over 956325.16 frames. 
], batch size: 38, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:40:23,444 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141465.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:24,523 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.071e+02 1.514e+02 1.750e+02 2.113e+02 3.817e+02, threshold=3.500e+02, percent-clipped=1.0 2023-03-27 06:40:33,178 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141478.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 06:40:46,341 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141499.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:48,236 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141502.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:51,781 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9492, 1.8496, 1.7702, 1.9076, 1.3384, 3.9046, 1.6390, 2.1583], device='cuda:6'), covar=tensor([0.3032, 0.2265, 0.1907, 0.2154, 0.1653, 0.0204, 0.2374, 0.1118], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0112, 0.0095, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 06:40:55,310 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141513.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:40:56,983 INFO [finetune.py:976] (6/7) Epoch 25, batch 4050, loss[loss=0.1853, simple_loss=0.2746, pruned_loss=0.04798, over 4817.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2434, pruned_loss=0.0501, over 955635.97 frames. ], batch size: 40, lr: 3.00e-03, grad_scale: 16.0 2023-03-27 06:41:07,965 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7763, 1.6036, 2.0563, 3.6054, 2.4252, 2.5878, 0.9973, 2.8730], device='cuda:6'), covar=tensor([0.1742, 0.1343, 0.1378, 0.0495, 0.0790, 0.1279, 0.1853, 0.0503], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0133, 0.0163, 0.0101, 0.0136, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 06:41:49,015 INFO [finetune.py:976] (6/7) Epoch 25, batch 4100, loss[loss=0.138, simple_loss=0.2225, pruned_loss=0.02673, over 4808.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2467, pruned_loss=0.05144, over 953608.69 frames. ], batch size: 29, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:41:50,183 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.605e+02 1.887e+02 2.173e+02 5.231e+02, threshold=3.774e+02, percent-clipped=3.0 2023-03-27 06:41:54,378 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141573.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:14,780 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141603.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:19,811 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.87 vs. limit=5.0 2023-03-27 06:42:22,403 INFO [finetune.py:976] (6/7) Epoch 25, batch 4150, loss[loss=0.1948, simple_loss=0.2786, pruned_loss=0.05546, over 4820.00 frames. ], tot_loss[loss=0.1753, simple_loss=0.2474, pruned_loss=0.05156, over 955256.98 frames. 
], batch size: 38, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:42:26,084 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141621.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:47,140 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141651.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:42:55,970 INFO [finetune.py:976] (6/7) Epoch 25, batch 4200, loss[loss=0.1896, simple_loss=0.258, pruned_loss=0.06058, over 4925.00 frames. ], tot_loss[loss=0.1746, simple_loss=0.2475, pruned_loss=0.05087, over 954824.14 frames. ], batch size: 33, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:42:57,195 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.937e+01 1.553e+02 1.832e+02 2.223e+02 5.119e+02, threshold=3.664e+02, percent-clipped=2.0 2023-03-27 06:43:09,620 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141685.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:43:29,320 INFO [finetune.py:976] (6/7) Epoch 25, batch 4250, loss[loss=0.1936, simple_loss=0.249, pruned_loss=0.0691, over 4820.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2447, pruned_loss=0.05004, over 953093.03 frames. ], batch size: 38, lr: 3.00e-03, grad_scale: 32.0 2023-03-27 06:43:59,174 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3036, 2.8907, 2.7571, 1.1973, 2.9647, 2.2040, 0.8912, 1.9491], device='cuda:6'), covar=tensor([0.2431, 0.2435, 0.1974, 0.3692, 0.1475, 0.1213, 0.3854, 0.1849], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0178, 0.0160, 0.0130, 0.0160, 0.0123, 0.0147, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 06:44:21,245 INFO [finetune.py:976] (6/7) Epoch 25, batch 4300, loss[loss=0.1552, simple_loss=0.2217, pruned_loss=0.04431, over 4741.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2429, pruned_loss=0.04994, over 954077.04 frames. ], batch size: 59, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:44:22,427 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.814e+01 1.357e+02 1.630e+02 2.025e+02 3.929e+02, threshold=3.260e+02, percent-clipped=1.0 2023-03-27 06:44:26,619 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141773.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 06:44:43,606 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=141797.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:44:44,842 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141799.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:44:55,203 INFO [finetune.py:976] (6/7) Epoch 25, batch 4350, loss[loss=0.1514, simple_loss=0.2225, pruned_loss=0.04019, over 4817.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2398, pruned_loss=0.04877, over 954661.33 frames. ], batch size: 25, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:45:09,358 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-27 06:45:17,472 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=141847.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:45:28,846 INFO [finetune.py:976] (6/7) Epoch 25, batch 4400, loss[loss=0.1611, simple_loss=0.242, pruned_loss=0.04009, over 4750.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2407, pruned_loss=0.04957, over 953099.46 frames. 
], batch size: 27, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:45:30,033 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.038e+02 1.595e+02 1.814e+02 2.202e+02 4.275e+02, threshold=3.628e+02, percent-clipped=6.0 2023-03-27 06:45:43,814 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6429, 3.7643, 3.4633, 1.6868, 3.8114, 2.9441, 0.8179, 2.6514], device='cuda:6'), covar=tensor([0.2307, 0.2440, 0.1739, 0.3718, 0.1191, 0.0959, 0.4614, 0.1505], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0176, 0.0159, 0.0129, 0.0159, 0.0122, 0.0146, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 06:45:48,456 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8639, 2.5302, 2.3715, 1.4172, 2.5711, 1.9586, 1.8670, 2.3162], device='cuda:6'), covar=tensor([0.1185, 0.0894, 0.2031, 0.2073, 0.1572, 0.2502, 0.2414, 0.1173], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0192, 0.0200, 0.0181, 0.0210, 0.0212, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:46:01,883 INFO [finetune.py:976] (6/7) Epoch 25, batch 4450, loss[loss=0.1347, simple_loss=0.2191, pruned_loss=0.02517, over 4755.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2453, pruned_loss=0.0509, over 951985.07 frames. ], batch size: 27, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:46:02,592 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141916.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:46:10,447 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=141928.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:46:46,891 INFO [finetune.py:976] (6/7) Epoch 25, batch 4500, loss[loss=0.1785, simple_loss=0.2567, pruned_loss=0.05011, over 4893.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.247, pruned_loss=0.05143, over 952799.99 frames. ], batch size: 43, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:46:52,641 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.869e+01 1.594e+02 1.826e+02 2.236e+02 4.959e+02, threshold=3.653e+02, percent-clipped=2.0 2023-03-27 06:47:02,925 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141977.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:47:08,162 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=141985.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:47:10,675 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=141989.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:47:27,114 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2080, 2.0582, 2.2725, 1.3655, 2.1268, 2.1716, 2.1880, 1.8262], device='cuda:6'), covar=tensor([0.0541, 0.0699, 0.0677, 0.0982, 0.1124, 0.0727, 0.0613, 0.1076], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0136, 0.0140, 0.0119, 0.0126, 0.0137, 0.0138, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:47:30,042 INFO [finetune.py:976] (6/7) Epoch 25, batch 4550, loss[loss=0.1783, simple_loss=0.2461, pruned_loss=0.05528, over 4830.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2476, pruned_loss=0.05091, over 952963.62 frames. 
], batch size: 49, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:47:41,389 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142033.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:48:03,343 INFO [finetune.py:976] (6/7) Epoch 25, batch 4600, loss[loss=0.1427, simple_loss=0.2259, pruned_loss=0.02979, over 4809.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2469, pruned_loss=0.05016, over 951354.02 frames. ], batch size: 40, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:48:04,587 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.716e+01 1.569e+02 1.799e+02 2.271e+02 3.318e+02, threshold=3.598e+02, percent-clipped=0.0 2023-03-27 06:48:08,720 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142073.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 06:48:15,637 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 06:48:23,761 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142097.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:48:36,591 INFO [finetune.py:976] (6/7) Epoch 25, batch 4650, loss[loss=0.1426, simple_loss=0.2147, pruned_loss=0.03526, over 4935.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.245, pruned_loss=0.04985, over 952782.24 frames. ], batch size: 33, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:48:40,334 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142121.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 06:48:57,212 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142145.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:49:14,797 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 06:49:20,005 INFO [finetune.py:976] (6/7) Epoch 25, batch 4700, loss[loss=0.1167, simple_loss=0.1946, pruned_loss=0.01938, over 4802.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2413, pruned_loss=0.04849, over 954942.12 frames. ], batch size: 25, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:49:21,184 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.163e+01 1.384e+02 1.765e+02 2.088e+02 3.764e+02, threshold=3.531e+02, percent-clipped=1.0 2023-03-27 06:49:47,858 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1055, 2.0640, 1.7626, 2.1949, 2.7341, 2.1809, 2.2442, 1.6096], device='cuda:6'), covar=tensor([0.2396, 0.2102, 0.2189, 0.1798, 0.1801, 0.1306, 0.2139, 0.2074], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0213, 0.0216, 0.0199, 0.0247, 0.0192, 0.0219, 0.0207], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:50:00,899 INFO [finetune.py:976] (6/7) Epoch 25, batch 4750, loss[loss=0.1655, simple_loss=0.2463, pruned_loss=0.04232, over 4944.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2399, pruned_loss=0.04838, over 953856.09 frames. 
], batch size: 33, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:50:11,683 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1599, 3.6503, 3.8348, 4.0650, 3.9370, 3.6187, 4.2503, 1.4820], device='cuda:6'), covar=tensor([0.0798, 0.0947, 0.1000, 0.0999, 0.1281, 0.1865, 0.0787, 0.5805], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0250, 0.0284, 0.0298, 0.0339, 0.0290, 0.0307, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:50:18,424 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5887, 2.7554, 2.5660, 1.7292, 2.4242, 2.7896, 2.8963, 2.1492], device='cuda:6'), covar=tensor([0.0587, 0.0544, 0.0748, 0.0933, 0.0994, 0.0690, 0.0505, 0.1043], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0136, 0.0141, 0.0119, 0.0127, 0.0138, 0.0139, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:50:29,897 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2347, 2.1677, 1.7238, 2.1412, 2.1278, 1.8423, 2.4482, 2.2431], device='cuda:6'), covar=tensor([0.1274, 0.1969, 0.2884, 0.2595, 0.2550, 0.1631, 0.3231, 0.1592], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0191, 0.0237, 0.0256, 0.0251, 0.0207, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:50:34,333 INFO [finetune.py:976] (6/7) Epoch 25, batch 4800, loss[loss=0.1449, simple_loss=0.2185, pruned_loss=0.03569, over 4803.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2413, pruned_loss=0.04912, over 951544.46 frames. ], batch size: 25, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:50:35,546 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.947e+01 1.535e+02 1.762e+02 2.238e+02 3.446e+02, threshold=3.524e+02, percent-clipped=1.0 2023-03-27 06:50:39,634 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=142272.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:50:47,484 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=142284.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:50:51,936 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.00 vs. limit=5.0 2023-03-27 06:51:07,517 INFO [finetune.py:976] (6/7) Epoch 25, batch 4850, loss[loss=0.2153, simple_loss=0.2941, pruned_loss=0.06821, over 4843.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2433, pruned_loss=0.04945, over 950485.27 frames. ], batch size: 49, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:51:39,148 INFO [finetune.py:976] (6/7) Epoch 25, batch 4900, loss[loss=0.1662, simple_loss=0.2404, pruned_loss=0.04601, over 4853.00 frames. ], tot_loss[loss=0.1741, simple_loss=0.2461, pruned_loss=0.05109, over 951798.80 frames. ], batch size: 31, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:51:40,866 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.551e+02 1.812e+02 2.135e+02 6.918e+02, threshold=3.624e+02, percent-clipped=2.0 2023-03-27 06:52:31,144 INFO [finetune.py:976] (6/7) Epoch 25, batch 4950, loss[loss=0.1716, simple_loss=0.248, pruned_loss=0.0476, over 4763.00 frames. ], tot_loss[loss=0.1749, simple_loss=0.2471, pruned_loss=0.05139, over 949638.62 frames. 
], batch size: 54, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:53:03,312 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4964, 2.5258, 2.0823, 2.8074, 2.5527, 2.1941, 3.0571, 2.5793], device='cuda:6'), covar=tensor([0.1378, 0.2222, 0.2859, 0.2392, 0.2374, 0.1725, 0.2817, 0.1738], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0191, 0.0237, 0.0256, 0.0251, 0.0207, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:53:03,776 INFO [finetune.py:976] (6/7) Epoch 25, batch 5000, loss[loss=0.1895, simple_loss=0.2473, pruned_loss=0.06585, over 4929.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2459, pruned_loss=0.05106, over 952782.00 frames. ], batch size: 33, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:53:04,975 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.190e+01 1.432e+02 1.813e+02 2.155e+02 3.992e+02, threshold=3.625e+02, percent-clipped=1.0 2023-03-27 06:53:12,260 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3807, 2.1834, 2.3979, 1.6968, 2.1415, 2.4358, 2.4574, 1.7996], device='cuda:6'), covar=tensor([0.0467, 0.0650, 0.0645, 0.0853, 0.0773, 0.0617, 0.0574, 0.1131], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0137, 0.0141, 0.0120, 0.0128, 0.0139, 0.0141, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:53:29,342 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1929, 2.0316, 1.9072, 2.0758, 1.9846, 1.9852, 2.0033, 2.5892], device='cuda:6'), covar=tensor([0.2862, 0.3701, 0.2691, 0.3012, 0.3575, 0.2157, 0.3331, 0.1357], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0264, 0.0236, 0.0276, 0.0259, 0.0229, 0.0256, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:53:36,413 INFO [finetune.py:976] (6/7) Epoch 25, batch 5050, loss[loss=0.1523, simple_loss=0.2273, pruned_loss=0.03862, over 4764.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2437, pruned_loss=0.05043, over 951721.23 frames. ], batch size: 28, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:53:37,696 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.6099, 3.2917, 3.4154, 3.3289, 3.2357, 3.1955, 3.6924, 1.4401], device='cuda:6'), covar=tensor([0.1574, 0.2238, 0.1762, 0.2388, 0.2668, 0.2674, 0.1628, 0.8046], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0249, 0.0283, 0.0296, 0.0337, 0.0288, 0.0308, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:53:51,076 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.79 vs. limit=5.0 2023-03-27 06:54:09,846 INFO [finetune.py:976] (6/7) Epoch 25, batch 5100, loss[loss=0.1828, simple_loss=0.2426, pruned_loss=0.06151, over 4933.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.241, pruned_loss=0.0494, over 951427.32 frames. 
], batch size: 33, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:54:11,045 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.212e+01 1.519e+02 1.807e+02 2.247e+02 4.075e+02, threshold=3.613e+02, percent-clipped=2.0 2023-03-27 06:54:14,218 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142572.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:54:14,841 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=142573.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:54:27,723 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=142584.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:54:59,674 INFO [finetune.py:976] (6/7) Epoch 25, batch 5150, loss[loss=0.2017, simple_loss=0.2647, pruned_loss=0.06935, over 4819.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2408, pruned_loss=0.04932, over 953037.72 frames. ], batch size: 47, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:55:01,637 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 06:55:03,302 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142620.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:55:10,595 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=142632.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:55:12,819 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=142634.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:55:20,577 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5320, 1.4774, 1.3511, 1.4414, 1.7675, 1.7065, 1.5542, 1.3353], device='cuda:6'), covar=tensor([0.0372, 0.0296, 0.0614, 0.0305, 0.0207, 0.0485, 0.0306, 0.0435], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0145, 0.0110, 0.0100, 0.0114, 0.0102, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7679e-05, 8.0980e-05, 1.1288e-04, 8.4273e-05, 7.7511e-05, 8.4414e-05, 7.5923e-05, 8.4892e-05], device='cuda:6') 2023-03-27 06:55:33,010 INFO [finetune.py:976] (6/7) Epoch 25, batch 5200, loss[loss=0.1541, simple_loss=0.2381, pruned_loss=0.03504, over 4759.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2441, pruned_loss=0.04959, over 953140.64 frames. ], batch size: 27, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:55:34,193 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.084e+02 1.563e+02 1.762e+02 2.093e+02 3.679e+02, threshold=3.523e+02, percent-clipped=1.0 2023-03-27 06:56:06,168 INFO [finetune.py:976] (6/7) Epoch 25, batch 5250, loss[loss=0.1706, simple_loss=0.2352, pruned_loss=0.05294, over 4833.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2462, pruned_loss=0.05, over 952598.10 frames. ], batch size: 47, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:56:12,148 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=142724.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:56:39,094 INFO [finetune.py:976] (6/7) Epoch 25, batch 5300, loss[loss=0.1663, simple_loss=0.2236, pruned_loss=0.0545, over 4579.00 frames. ], tot_loss[loss=0.1748, simple_loss=0.2478, pruned_loss=0.05089, over 953538.66 frames. 
], batch size: 20, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:56:40,274 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.031e+02 1.558e+02 1.826e+02 2.127e+02 3.045e+02, threshold=3.651e+02, percent-clipped=0.0 2023-03-27 06:56:51,752 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=142785.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:57:19,998 INFO [finetune.py:976] (6/7) Epoch 25, batch 5350, loss[loss=0.1203, simple_loss=0.1988, pruned_loss=0.02096, over 4721.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2466, pruned_loss=0.05033, over 952191.22 frames. ], batch size: 59, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:57:41,735 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6399, 1.4961, 1.0795, 0.3090, 1.2404, 1.4911, 1.3893, 1.3496], device='cuda:6'), covar=tensor([0.1087, 0.0848, 0.1543, 0.2050, 0.1397, 0.2594, 0.2656, 0.1058], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0193, 0.0202, 0.0183, 0.0211, 0.0213, 0.0226, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 06:58:06,042 INFO [finetune.py:976] (6/7) Epoch 25, batch 5400, loss[loss=0.1817, simple_loss=0.2412, pruned_loss=0.06108, over 4767.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2433, pruned_loss=0.04947, over 949623.39 frames. ], batch size: 26, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:58:07,256 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.487e+02 1.682e+02 2.190e+02 4.832e+02, threshold=3.364e+02, percent-clipped=1.0 2023-03-27 06:58:38,661 INFO [finetune.py:976] (6/7) Epoch 25, batch 5450, loss[loss=0.1449, simple_loss=0.2022, pruned_loss=0.04385, over 4055.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2403, pruned_loss=0.04882, over 948976.59 frames. ], batch size: 17, lr: 2.99e-03, grad_scale: 32.0 2023-03-27 06:58:47,650 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=142929.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 06:58:52,352 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6407, 1.5696, 1.6158, 0.9838, 1.8596, 2.0848, 1.8187, 1.5119], device='cuda:6'), covar=tensor([0.1141, 0.1087, 0.0708, 0.0709, 0.0478, 0.0656, 0.0502, 0.0861], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0147, 0.0127, 0.0122, 0.0130, 0.0129, 0.0142, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.9069e-05, 1.0600e-04, 9.0818e-05, 8.5537e-05, 9.0888e-05, 9.1768e-05, 1.0096e-04, 1.0574e-04], device='cuda:6') 2023-03-27 06:58:56,167 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-27 06:59:07,382 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 06:59:11,890 INFO [finetune.py:976] (6/7) Epoch 25, batch 5500, loss[loss=0.1286, simple_loss=0.2007, pruned_loss=0.02823, over 4745.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2368, pruned_loss=0.0477, over 950094.92 frames. ], batch size: 28, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 06:59:13,716 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.198e+01 1.372e+02 1.708e+02 2.223e+02 4.314e+02, threshold=3.415e+02, percent-clipped=3.0 2023-03-27 06:59:23,797 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.89 vs. 
limit=2.0 2023-03-27 06:59:46,281 INFO [finetune.py:976] (6/7) Epoch 25, batch 5550, loss[loss=0.1346, simple_loss=0.2087, pruned_loss=0.03029, over 4903.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2377, pruned_loss=0.04806, over 952730.00 frames. ], batch size: 32, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 07:00:06,557 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1468, 3.6096, 3.7979, 4.0420, 3.8825, 3.6459, 4.2493, 1.2531], device='cuda:6'), covar=tensor([0.0833, 0.0942, 0.0932, 0.0964, 0.1394, 0.1636, 0.0847, 0.6437], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0248, 0.0281, 0.0296, 0.0337, 0.0288, 0.0307, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:00:06,837 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-27 07:00:29,924 INFO [finetune.py:976] (6/7) Epoch 25, batch 5600, loss[loss=0.1889, simple_loss=0.2569, pruned_loss=0.06044, over 4908.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2433, pruned_loss=0.04958, over 953194.56 frames. ], batch size: 36, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 07:00:30,057 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8644, 1.7107, 1.5871, 1.8633, 2.3981, 1.9809, 1.8776, 1.5429], device='cuda:6'), covar=tensor([0.2055, 0.1922, 0.1852, 0.1698, 0.1667, 0.1150, 0.2016, 0.1806], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0211, 0.0216, 0.0199, 0.0245, 0.0193, 0.0217, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:00:30,592 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2498, 1.5620, 0.7742, 2.0189, 2.3677, 1.9571, 1.7631, 2.0651], device='cuda:6'), covar=tensor([0.1471, 0.2015, 0.2070, 0.1245, 0.2008, 0.1728, 0.1408, 0.1981], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0092, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 07:00:31,669 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.665e+01 1.700e+02 1.937e+02 2.306e+02 4.675e+02, threshold=3.875e+02, percent-clipped=1.0 2023-03-27 07:00:38,729 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=143080.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:00:51,422 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4409, 1.3150, 1.6634, 1.6355, 1.5726, 3.2542, 1.3435, 1.4895], device='cuda:6'), covar=tensor([0.1105, 0.1941, 0.1181, 0.0986, 0.1702, 0.0222, 0.1532, 0.1907], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 07:00:59,194 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9034, 4.0065, 3.7256, 1.9047, 4.0811, 3.1868, 1.0768, 2.8716], device='cuda:6'), covar=tensor([0.2016, 0.2549, 0.1506, 0.3372, 0.0894, 0.0820, 0.4156, 0.1444], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0179, 0.0161, 0.0130, 0.0161, 0.0123, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 07:01:00,341 INFO [finetune.py:976] (6/7) Epoch 25, batch 5650, loss[loss=0.1695, simple_loss=0.2542, 
pruned_loss=0.04235, over 4807.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2457, pruned_loss=0.04949, over 955440.19 frames. ], batch size: 51, lr: 2.99e-03, grad_scale: 16.0 2023-03-27 07:01:23,399 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6153, 2.3891, 1.8091, 0.9280, 2.0699, 2.2015, 1.9164, 2.0953], device='cuda:6'), covar=tensor([0.0748, 0.0738, 0.1305, 0.1746, 0.1069, 0.2026, 0.2134, 0.0792], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0183, 0.0211, 0.0213, 0.0225, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:01:29,839 INFO [finetune.py:976] (6/7) Epoch 25, batch 5700, loss[loss=0.1266, simple_loss=0.195, pruned_loss=0.02911, over 3929.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2409, pruned_loss=0.04826, over 934214.86 frames. ], batch size: 17, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:01:31,568 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.213e+01 1.339e+02 1.671e+02 2.043e+02 4.216e+02, threshold=3.342e+02, percent-clipped=1.0 2023-03-27 07:01:33,698 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.05 vs. limit=5.0 2023-03-27 07:01:35,820 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4260, 1.5972, 1.6692, 0.9342, 1.6965, 1.8155, 1.8598, 1.5317], device='cuda:6'), covar=tensor([0.1100, 0.0674, 0.0677, 0.0650, 0.0725, 0.0650, 0.0562, 0.0834], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0146, 0.0126, 0.0121, 0.0129, 0.0128, 0.0140, 0.0147], device='cuda:6'), out_proj_covar=tensor([8.8016e-05, 1.0488e-04, 9.0215e-05, 8.4865e-05, 8.9921e-05, 9.0713e-05, 9.9719e-05, 1.0468e-04], device='cuda:6') 2023-03-27 07:01:58,298 INFO [finetune.py:976] (6/7) Epoch 26, batch 0, loss[loss=0.1718, simple_loss=0.254, pruned_loss=0.04482, over 4894.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.254, pruned_loss=0.04482, over 4894.00 frames. 
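Note: in the progress entries above, loss[...] is the current batch and tot_loss[...] is a running average weighted by frame count, which is why at "Epoch 26, batch 0" the two coincide and the frame counter resets from ~950k back to the size of one batch. A minimal sketch of such a tracker (hypothetical RunningLoss class; icefall's real tracker may additionally decay old batches):

    class RunningLoss:
        """Frame-weighted running average, as in 'tot_loss[..., over N frames.]'."""

        def __init__(self):
            self.weighted_sum = 0.0
            self.frames = 0.0

        def update(self, loss: float, frames: float) -> None:
            self.weighted_sum += loss * frames
            self.frames += frames

        @property
        def average(self) -> float:
            return self.weighted_sum / max(self.frames, 1.0)

    tot = RunningLoss()
    tot.update(0.1718, 4894.0)  # the Epoch 26, batch 0 values above
    print(f"tot_loss[loss={tot.average:.4f}, over {tot.frames:.2f} frames.]")
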
], batch size: 46, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:01:58,299 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 07:02:01,173 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8164, 1.7133, 2.0293, 1.3789, 1.6913, 2.0314, 1.6625, 2.1532], device='cuda:6'), covar=tensor([0.1142, 0.2076, 0.1245, 0.1638, 0.1028, 0.1307, 0.2677, 0.0852], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0205, 0.0190, 0.0190, 0.0173, 0.0212, 0.0215, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:02:03,200 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4447, 1.2949, 1.2481, 1.3541, 1.6099, 1.5986, 1.3323, 1.2358], device='cuda:6'), covar=tensor([0.0414, 0.0383, 0.0630, 0.0360, 0.0262, 0.0455, 0.0435, 0.0462], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0107, 0.0146, 0.0112, 0.0102, 0.0116, 0.0104, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.8500e-05, 8.1959e-05, 1.1422e-04, 8.5531e-05, 7.8734e-05, 8.5958e-05, 7.7183e-05, 8.6065e-05], device='cuda:6') 2023-03-27 07:02:06,200 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1554, 1.9652, 1.8324, 1.7919, 1.9548, 1.9989, 1.9765, 2.6212], device='cuda:6'), covar=tensor([0.3858, 0.4753, 0.3533, 0.3890, 0.3961, 0.2736, 0.3754, 0.1711], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0265, 0.0237, 0.0278, 0.0260, 0.0230, 0.0257, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:02:14,280 INFO [finetune.py:1010] (6/7) Epoch 26, validation: loss=0.1591, simple_loss=0.2269, pruned_loss=0.04565, over 2265189.00 frames. 2023-03-27 07:02:14,281 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 07:02:43,899 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=143229.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:03:00,607 INFO [finetune.py:976] (6/7) Epoch 26, batch 50, loss[loss=0.156, simple_loss=0.236, pruned_loss=0.03796, over 4776.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.247, pruned_loss=0.05116, over 215338.47 frames. 
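Note: the "Computing validation loss" / "Epoch 26, validation: loss=..." pair above comes from a periodic pass over the dev set; the sketch below shows the usual shape of such a pass (the model(batch) -> (loss, num_frames) interface is a placeholder, not finetune.py's actual API). torch.cuda.max_memory_allocated() is the real call behind the "Maximum memory allocated" lines.

    import torch

    def compute_validation_loss(model, dev_loader):
        """Frame-weighted average loss over the dev set (sketch)."""
        model.eval()
        tot, frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = model(batch)  # placeholder interface
                tot += loss.item() * num_frames
                frames += num_frames
        model.train()
        return tot / max(frames, 1.0)

    mb = torch.cuda.max_memory_allocated() // (1024 * 1024)
    print(f"Maximum memory allocated so far is {mb}MB")
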
], batch size: 51, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:03:13,062 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2459, 2.0167, 2.0414, 0.9453, 2.4857, 2.5897, 2.2261, 1.9834], device='cuda:6'), covar=tensor([0.0961, 0.0704, 0.0611, 0.0717, 0.0531, 0.0680, 0.0462, 0.0773], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0147, 0.0128, 0.0122, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.8943e-05, 1.0601e-04, 9.1068e-05, 8.5568e-05, 9.0961e-05, 9.1380e-05, 1.0051e-04, 1.0563e-04], device='cuda:6') 2023-03-27 07:03:18,454 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.036e+02 1.460e+02 1.766e+02 2.058e+02 4.416e+02, threshold=3.532e+02, percent-clipped=3.0 2023-03-27 07:03:19,190 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8970, 1.7449, 1.4905, 1.4407, 1.8797, 1.6306, 1.8277, 1.8457], device='cuda:6'), covar=tensor([0.1351, 0.1960, 0.3039, 0.2399, 0.2624, 0.1737, 0.2508, 0.1743], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0191, 0.0237, 0.0255, 0.0251, 0.0207, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:03:24,476 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=143277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:03:30,582 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7475, 2.6582, 2.3272, 2.9638, 2.7100, 2.3899, 3.1515, 2.8483], device='cuda:6'), covar=tensor([0.1310, 0.2243, 0.2782, 0.2444, 0.2508, 0.1769, 0.2634, 0.1729], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0191, 0.0237, 0.0255, 0.0252, 0.0208, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:03:34,112 INFO [finetune.py:976] (6/7) Epoch 26, batch 100, loss[loss=0.1695, simple_loss=0.2454, pruned_loss=0.04684, over 4735.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.241, pruned_loss=0.04838, over 379856.59 frames. ], batch size: 54, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:03:34,192 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7531, 4.0008, 3.7667, 2.0050, 4.1035, 3.1100, 0.7986, 2.8815], device='cuda:6'), covar=tensor([0.2277, 0.1779, 0.1790, 0.3354, 0.1046, 0.0986, 0.4683, 0.1494], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0178, 0.0161, 0.0130, 0.0160, 0.0123, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 07:04:07,507 INFO [finetune.py:976] (6/7) Epoch 26, batch 150, loss[loss=0.1632, simple_loss=0.2258, pruned_loss=0.05031, over 4833.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2385, pruned_loss=0.04949, over 509039.14 frames. ], batch size: 33, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:04:15,753 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.65 vs. limit=5.0 2023-03-27 07:04:25,693 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.744e+01 1.335e+02 1.679e+02 2.114e+02 2.886e+02, threshold=3.358e+02, percent-clipped=0.0 2023-03-27 07:04:33,624 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=143380.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:04:41,260 INFO [finetune.py:976] (6/7) Epoch 26, batch 200, loss[loss=0.1863, simple_loss=0.252, pruned_loss=0.0603, over 4819.00 frames. 
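Note: in the optim.py:369 lines above, the reported threshold is consistently clipping_scale times the median gradient norm (e.g. 2.0 * 1.766e+02 = 3.532e+02 in the entry just above), and percent-clipped is the share of recent steps whose norm exceeded it. A sketch of that bookkeeping (illustrative only, not the actual optimizer code):

    import torch

    def clipping_report(recent_grad_norms, clipping_scale=2.0):
        """Reproduces the shape of the 'grad-norm quartiles ... threshold ...
        percent-clipped' lines: threshold = clipping_scale * median."""
        norms = torch.tensor(recent_grad_norms)
        q = torch.quantile(norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = clipping_scale * q[2]
        pct = 100.0 * (norms > threshold).float().mean()
        quart = " ".join(f"{v:.3e}" for v in q.tolist())
        print(f"Clipping_scale={clipping_scale}, grad-norm quartiles {quart}, "
              f"threshold={threshold:.3e}, percent-clipped={pct:.1f}")
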
], tot_loss[loss=0.1666, simple_loss=0.2359, pruned_loss=0.0487, over 608741.12 frames. ], batch size: 39, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:04:46,840 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=143400.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:05:05,300 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=143428.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:05:05,381 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3848, 2.3177, 1.9136, 2.4640, 2.3658, 2.0139, 2.8707, 2.4667], device='cuda:6'), covar=tensor([0.1481, 0.2394, 0.3058, 0.2864, 0.2774, 0.1867, 0.2853, 0.1957], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0191, 0.0237, 0.0256, 0.0252, 0.0208, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:05:22,141 INFO [finetune.py:976] (6/7) Epoch 26, batch 250, loss[loss=0.1299, simple_loss=0.2088, pruned_loss=0.0255, over 4786.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2398, pruned_loss=0.04959, over 686840.21 frames. ], batch size: 26, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:05:48,880 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=143461.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 07:05:51,342 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4730, 2.3523, 1.8450, 0.8958, 2.0945, 1.9559, 1.8314, 2.0728], device='cuda:6'), covar=tensor([0.0974, 0.0717, 0.1520, 0.2025, 0.1301, 0.2120, 0.2027, 0.0952], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0193, 0.0202, 0.0183, 0.0212, 0.0213, 0.0226, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:05:53,067 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.741e+01 1.618e+02 1.961e+02 2.394e+02 5.476e+02, threshold=3.922e+02, percent-clipped=2.0 2023-03-27 07:05:59,886 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=143473.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:06:12,193 INFO [finetune.py:976] (6/7) Epoch 26, batch 300, loss[loss=0.157, simple_loss=0.2438, pruned_loss=0.03504, over 4840.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2448, pruned_loss=0.0509, over 746878.76 frames. ], batch size: 49, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:06:27,536 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7318, 2.1520, 3.0564, 2.0673, 2.6250, 3.0610, 2.0391, 2.9372], device='cuda:6'), covar=tensor([0.1214, 0.2056, 0.1263, 0.1824, 0.0953, 0.1278, 0.2496, 0.0860], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0204, 0.0190, 0.0189, 0.0173, 0.0212, 0.0215, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:06:40,201 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=143534.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:06:44,912 INFO [finetune.py:976] (6/7) Epoch 26, batch 350, loss[loss=0.215, simple_loss=0.2788, pruned_loss=0.07556, over 4795.00 frames. ], tot_loss[loss=0.1756, simple_loss=0.2476, pruned_loss=0.05181, over 794911.03 frames. 
], batch size: 51, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:07:03,059 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.441e+02 1.724e+02 2.077e+02 3.544e+02, threshold=3.448e+02, percent-clipped=0.0 2023-03-27 07:07:04,373 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7281, 1.6340, 1.6176, 1.7109, 1.3276, 3.7965, 1.5961, 1.9977], device='cuda:6'), covar=tensor([0.3369, 0.2628, 0.2142, 0.2398, 0.1726, 0.0183, 0.2445, 0.1224], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 07:07:11,067 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8278, 1.2185, 1.8923, 1.8625, 1.6848, 1.6525, 1.8354, 1.8049], device='cuda:6'), covar=tensor([0.3745, 0.3877, 0.2985, 0.3530, 0.4652, 0.3721, 0.4067, 0.2892], device='cuda:6'), in_proj_covar=tensor([0.0262, 0.0246, 0.0266, 0.0291, 0.0292, 0.0269, 0.0298, 0.0248], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:07:13,956 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7323, 1.5851, 1.4532, 0.9374, 1.7548, 1.8093, 1.8436, 1.4128], device='cuda:6'), covar=tensor([0.0914, 0.0640, 0.0611, 0.0566, 0.0459, 0.0702, 0.0404, 0.0736], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0127, 0.0122, 0.0130, 0.0129, 0.0141, 0.0148], device='cuda:6'), out_proj_covar=tensor([8.9023e-05, 1.0617e-04, 9.0670e-05, 8.5513e-05, 9.1018e-05, 9.1407e-05, 1.0060e-04, 1.0583e-04], device='cuda:6') 2023-03-27 07:07:18,102 INFO [finetune.py:976] (6/7) Epoch 26, batch 400, loss[loss=0.1765, simple_loss=0.258, pruned_loss=0.04744, over 4816.00 frames. ], tot_loss[loss=0.1755, simple_loss=0.2482, pruned_loss=0.05143, over 831464.14 frames. ], batch size: 38, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:07:24,868 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4260, 2.3684, 2.2538, 1.8195, 2.0931, 2.5643, 2.6032, 1.8873], device='cuda:6'), covar=tensor([0.0558, 0.0651, 0.0869, 0.0857, 0.1419, 0.0678, 0.0571, 0.1246], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0137, 0.0142, 0.0119, 0.0128, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:07:40,606 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.66 vs. limit=5.0 2023-03-27 07:07:54,072 INFO [finetune.py:976] (6/7) Epoch 26, batch 450, loss[loss=0.131, simple_loss=0.208, pruned_loss=0.02694, over 4796.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2458, pruned_loss=0.05025, over 858480.72 frames. 
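Note: the zipformer.py:2441 dumps above report per-head summaries of the attention weights (entropies plus covariance statistics), a common way to spot heads that have collapsed to near-deterministic attention. A sketch of the entropy part (the tensor layout here is an assumption):

    import torch

    def attn_weights_entropy(attn: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
        """Shannon entropy of each head's attention distribution, averaged
        over query positions. attn: (num_heads, num_queries, num_keys),
        rows summing to 1."""
        ent = -(attn * (attn + eps).log()).sum(dim=-1)  # (heads, queries)
        return ent.mean(dim=-1)                         # (heads,)

    attn = torch.softmax(torch.randn(8, 16, 16), dim=-1)
    print(attn_weights_entropy(attn))  # values near 0 => sharp, peaked heads
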
], batch size: 29, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:07:58,549 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8431, 1.4624, 1.7611, 1.2381, 1.8038, 1.8434, 1.8140, 1.2181], device='cuda:6'), covar=tensor([0.0663, 0.1093, 0.0870, 0.0982, 0.0855, 0.0737, 0.0729, 0.1969], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0137, 0.0142, 0.0119, 0.0128, 0.0138, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:08:09,090 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5847, 1.2391, 1.8807, 3.0504, 1.9957, 2.3814, 1.0235, 2.7061], device='cuda:6'), covar=tensor([0.2121, 0.2056, 0.1740, 0.1056, 0.1107, 0.1409, 0.2200, 0.0635], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0101, 0.0136, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 07:08:22,188 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.245e+01 1.544e+02 1.809e+02 2.165e+02 3.752e+02, threshold=3.619e+02, percent-clipped=3.0 2023-03-27 07:08:37,481 INFO [finetune.py:976] (6/7) Epoch 26, batch 500, loss[loss=0.1237, simple_loss=0.1999, pruned_loss=0.02373, over 4785.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2433, pruned_loss=0.04974, over 881858.58 frames. ], batch size: 26, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:09:11,112 INFO [finetune.py:976] (6/7) Epoch 26, batch 550, loss[loss=0.1703, simple_loss=0.2462, pruned_loss=0.04724, over 4823.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2411, pruned_loss=0.04901, over 897947.01 frames. ], batch size: 39, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:09:20,248 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=143756.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 07:09:28,913 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.175e+01 1.443e+02 1.723e+02 1.984e+02 5.074e+02, threshold=3.446e+02, percent-clipped=2.0 2023-03-27 07:09:31,413 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3293, 2.2088, 1.7384, 0.8236, 1.9728, 1.7980, 1.6061, 2.0033], device='cuda:6'), covar=tensor([0.1017, 0.0750, 0.1428, 0.1976, 0.1234, 0.2413, 0.2408, 0.0896], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0183, 0.0212, 0.0212, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:09:41,432 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.28 vs. limit=2.0 2023-03-27 07:09:44,563 INFO [finetune.py:976] (6/7) Epoch 26, batch 600, loss[loss=0.1805, simple_loss=0.2705, pruned_loss=0.04524, over 4836.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2423, pruned_loss=0.04949, over 911323.80 frames. 
], batch size: 47, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:10:09,713 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=143829.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:10:12,222 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1975, 2.0236, 1.7791, 2.0043, 1.9525, 1.9008, 1.9800, 2.6919], device='cuda:6'), covar=tensor([0.3629, 0.3876, 0.3287, 0.3432, 0.3854, 0.2329, 0.3528, 0.1647], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0264, 0.0236, 0.0277, 0.0259, 0.0229, 0.0257, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:10:17,583 INFO [finetune.py:976] (6/7) Epoch 26, batch 650, loss[loss=0.198, simple_loss=0.2816, pruned_loss=0.05721, over 4826.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2452, pruned_loss=0.05025, over 919312.97 frames. ], batch size: 38, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:10:41,254 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.600e+02 1.821e+02 2.293e+02 5.159e+02, threshold=3.642e+02, percent-clipped=4.0 2023-03-27 07:10:56,901 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8679, 1.3618, 0.8529, 1.8023, 2.1928, 1.7868, 1.5068, 1.6958], device='cuda:6'), covar=tensor([0.1527, 0.2127, 0.2104, 0.1256, 0.2054, 0.2032, 0.1512, 0.2062], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 07:11:02,811 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.89 vs. limit=5.0 2023-03-27 07:11:12,408 INFO [finetune.py:976] (6/7) Epoch 26, batch 700, loss[loss=0.1488, simple_loss=0.2373, pruned_loss=0.03016, over 4781.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2466, pruned_loss=0.05059, over 927606.10 frames. ], batch size: 25, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:11:13,750 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5717, 1.6277, 2.0217, 1.9163, 1.8080, 3.1505, 1.6753, 1.7396], device='cuda:6'), covar=tensor([0.1010, 0.1535, 0.1354, 0.0797, 0.1320, 0.0316, 0.1232, 0.1583], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 07:11:42,199 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2536, 2.1222, 1.5460, 0.6442, 1.7857, 1.9506, 1.7838, 1.9140], device='cuda:6'), covar=tensor([0.0961, 0.0751, 0.1432, 0.1850, 0.1217, 0.2188, 0.2088, 0.0834], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0183, 0.0211, 0.0213, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:11:48,952 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.23 vs. limit=2.0 2023-03-27 07:11:51,895 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.56 vs. limit=2.0 2023-03-27 07:11:52,958 INFO [finetune.py:976] (6/7) Epoch 26, batch 750, loss[loss=0.1825, simple_loss=0.2525, pruned_loss=0.05628, over 4823.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2469, pruned_loss=0.05017, over 933898.05 frames. 
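Note: the scaling.py:679 lines above ("Whitening: ... metric=1.x vs. limit=2.0") track how far the channel covariance of an activation is from white; the constraint only engages once the metric passes the limit. One plausible whiteness measure, shown as an assumption rather than icefall's exact formula, is the mean squared eigenvalue over the squared mean eigenvalue, which equals 1.0 for perfectly white features:

    import torch

    def whitening_metric(x: torch.Tensor) -> torch.Tensor:
        """x: (num_frames, num_channels), assumed zero-mean.
        Returns 1.0 for white features, larger as the eigenvalue
        spread of the covariance grows."""
        cov = (x.T @ x) / x.shape[0]
        eigs = torch.linalg.eigvalsh(cov)
        return (eigs ** 2).mean() / eigs.mean() ** 2

    white = torch.randn(4000, 192)
    print(whitening_metric(white))                                  # ~1.0
    print(whitening_metric(white * torch.linspace(0.1, 2.0, 192)))  # larger
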
], batch size: 38, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:12:09,864 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.567e+02 1.788e+02 2.169e+02 3.888e+02, threshold=3.576e+02, percent-clipped=1.0 2023-03-27 07:12:23,534 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9582, 1.9256, 1.6953, 1.5837, 2.3663, 2.4251, 2.0690, 1.9387], device='cuda:6'), covar=tensor([0.0413, 0.0442, 0.0625, 0.0427, 0.0298, 0.0488, 0.0352, 0.0434], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0107, 0.0147, 0.0111, 0.0102, 0.0116, 0.0103, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.8859e-05, 8.1923e-05, 1.1446e-04, 8.5337e-05, 7.9135e-05, 8.5613e-05, 7.6399e-05, 8.6010e-05], device='cuda:6') 2023-03-27 07:12:26,437 INFO [finetune.py:976] (6/7) Epoch 26, batch 800, loss[loss=0.1511, simple_loss=0.23, pruned_loss=0.0361, over 4735.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.246, pruned_loss=0.04952, over 937091.02 frames. ], batch size: 23, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:12:48,820 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8706, 1.3024, 1.8539, 1.8877, 1.6760, 1.6175, 1.7983, 1.8097], device='cuda:6'), covar=tensor([0.3595, 0.3773, 0.3208, 0.3481, 0.4428, 0.3756, 0.4047, 0.3062], device='cuda:6'), in_proj_covar=tensor([0.0264, 0.0246, 0.0266, 0.0293, 0.0293, 0.0270, 0.0300, 0.0250], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:12:55,323 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2928, 2.1407, 1.5518, 0.6483, 1.7424, 1.9725, 1.7736, 1.9530], device='cuda:6'), covar=tensor([0.0738, 0.0709, 0.1307, 0.1772, 0.1234, 0.1969, 0.2128, 0.0750], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0194, 0.0202, 0.0183, 0.0212, 0.0213, 0.0226, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:12:59,653 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0 2023-03-27 07:13:00,657 INFO [finetune.py:976] (6/7) Epoch 26, batch 850, loss[loss=0.1532, simple_loss=0.224, pruned_loss=0.04118, over 4824.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2439, pruned_loss=0.04912, over 942483.85 frames. ], batch size: 33, lr: 2.98e-03, grad_scale: 16.0 2023-03-27 07:13:09,714 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=144056.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:13:16,917 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.460e+02 1.746e+02 2.115e+02 7.519e+02, threshold=3.492e+02, percent-clipped=2.0 2023-03-27 07:13:43,956 INFO [finetune.py:976] (6/7) Epoch 26, batch 900, loss[loss=0.1703, simple_loss=0.2409, pruned_loss=0.04982, over 4821.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2411, pruned_loss=0.04857, over 946409.71 frames. 
], batch size: 38, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:13:51,329 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=144104.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:14:05,684 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0540, 1.9491, 1.6443, 1.8458, 1.8108, 1.8407, 1.8915, 2.5738], device='cuda:6'), covar=tensor([0.3827, 0.4105, 0.3328, 0.3836, 0.3958, 0.2512, 0.3687, 0.1620], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0265, 0.0237, 0.0278, 0.0260, 0.0230, 0.0258, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:14:07,460 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=144129.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:14:16,858 INFO [finetune.py:976] (6/7) Epoch 26, batch 950, loss[loss=0.1747, simple_loss=0.2336, pruned_loss=0.05786, over 4909.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2399, pruned_loss=0.04857, over 949179.06 frames. ], batch size: 37, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:14:33,135 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.128e+02 1.468e+02 1.742e+02 2.065e+02 3.876e+02, threshold=3.485e+02, percent-clipped=2.0 2023-03-27 07:14:39,230 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=144177.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:14:50,350 INFO [finetune.py:976] (6/7) Epoch 26, batch 1000, loss[loss=0.1886, simple_loss=0.2646, pruned_loss=0.05625, over 4854.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2418, pruned_loss=0.04943, over 949104.90 frames. ], batch size: 44, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:15:04,655 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-27 07:15:20,631 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1056, 2.1051, 1.8579, 2.3291, 2.6428, 2.3138, 1.9392, 1.7626], device='cuda:6'), covar=tensor([0.2218, 0.1861, 0.1902, 0.1657, 0.1617, 0.1077, 0.2164, 0.1849], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0215, 0.0198, 0.0244, 0.0191, 0.0216, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:15:22,310 INFO [finetune.py:976] (6/7) Epoch 26, batch 1050, loss[loss=0.195, simple_loss=0.2762, pruned_loss=0.05692, over 4874.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2447, pruned_loss=0.0498, over 951262.37 frames. ], batch size: 34, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:15:40,004 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.483e+02 1.785e+02 2.219e+02 5.161e+02, threshold=3.570e+02, percent-clipped=2.0 2023-03-27 07:16:01,563 INFO [finetune.py:976] (6/7) Epoch 26, batch 1100, loss[loss=0.1942, simple_loss=0.2648, pruned_loss=0.06178, over 4735.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2468, pruned_loss=0.05061, over 949463.31 frames. ], batch size: 59, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:16:56,475 INFO [finetune.py:976] (6/7) Epoch 26, batch 1150, loss[loss=0.1789, simple_loss=0.251, pruned_loss=0.05336, over 4820.00 frames. ], tot_loss[loss=0.174, simple_loss=0.2473, pruned_loss=0.05034, over 951651.90 frames. 
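Note: the zipformer.py:1188 lines above log a per-stack layer-dropping decision: each encoder stack has its own (warmup_begin, warmup_end) window, and even at batch_count ~ 144k, far past warmup, an occasional num_to_drop=1 still appears. The schedule below is hypothetical (all probabilities are invented) but reproduces the logged fields:

    import random

    def choose_layers_to_drop(batch_count, warmup_begin, warmup_end,
                              num_layers, p_after_warmup=0.075):
        """Drop layers stochastically: more often inside the warmup window,
        rarely afterwards. All rates here are made up for illustration."""
        if batch_count < warmup_begin:
            p = 0.0
        elif batch_count < warmup_end:
            frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
            p = 0.5 * (1.0 - frac)   # decaying drop rate across the window
        else:
            p = p_after_warmup
        drop = {i for i in range(num_layers) if random.random() < p}
        print(f"warmup_begin={warmup_begin:.1f}, warmup_end={warmup_end:.1f}, "
              f"batch_count={batch_count:.1f}, num_to_drop={len(drop)}, "
              f"layers_to_drop={drop}")
        return drop
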
], batch size: 33, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:17:13,863 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.499e+02 1.760e+02 2.197e+02 4.327e+02, threshold=3.521e+02, percent-clipped=2.0 2023-03-27 07:17:30,186 INFO [finetune.py:976] (6/7) Epoch 26, batch 1200, loss[loss=0.1478, simple_loss=0.2269, pruned_loss=0.0344, over 4921.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2457, pruned_loss=0.04945, over 952815.63 frames. ], batch size: 38, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:17:39,287 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-27 07:17:43,699 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=144411.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:18:03,375 INFO [finetune.py:976] (6/7) Epoch 26, batch 1250, loss[loss=0.1637, simple_loss=0.2207, pruned_loss=0.0533, over 4795.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2437, pruned_loss=0.04935, over 953671.22 frames. ], batch size: 51, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:18:03,854 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.76 vs. limit=5.0 2023-03-27 07:18:21,717 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.894e+01 1.606e+02 1.805e+02 2.274e+02 3.881e+02, threshold=3.611e+02, percent-clipped=1.0 2023-03-27 07:18:24,298 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=144472.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:18:26,777 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.21 vs. limit=2.0 2023-03-27 07:18:37,271 INFO [finetune.py:976] (6/7) Epoch 26, batch 1300, loss[loss=0.1444, simple_loss=0.2209, pruned_loss=0.03393, over 4757.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2426, pruned_loss=0.04934, over 955668.52 frames. ], batch size: 27, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:19:00,384 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7021, 1.5483, 1.1215, 0.3117, 1.3544, 1.5007, 1.4242, 1.4933], device='cuda:6'), covar=tensor([0.0885, 0.0769, 0.1203, 0.1828, 0.1271, 0.2126, 0.2240, 0.0863], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0201, 0.0182, 0.0211, 0.0211, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:19:21,353 INFO [finetune.py:976] (6/7) Epoch 26, batch 1350, loss[loss=0.1983, simple_loss=0.287, pruned_loss=0.05484, over 4842.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2423, pruned_loss=0.04934, over 956919.39 frames. ], batch size: 49, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:19:39,470 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.499e+02 1.803e+02 2.073e+02 4.281e+02, threshold=3.607e+02, percent-clipped=1.0 2023-03-27 07:19:52,773 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7912, 1.8040, 1.6075, 1.9716, 2.4502, 2.0729, 1.8597, 1.4947], device='cuda:6'), covar=tensor([0.2278, 0.1882, 0.1942, 0.1689, 0.1583, 0.1128, 0.2146, 0.1992], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0216, 0.0199, 0.0245, 0.0191, 0.0216, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:19:54,498 INFO [finetune.py:976] (6/7) Epoch 26, batch 1400, loss[loss=0.1943, simple_loss=0.2683, pruned_loss=0.06015, over 4826.00 frames. 
], tot_loss[loss=0.1714, simple_loss=0.244, pruned_loss=0.0494, over 957374.12 frames. ], batch size: 33, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:20:21,948 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-27 07:20:22,401 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1795, 2.1380, 1.7119, 2.0166, 2.1061, 2.0465, 2.0988, 2.7391], device='cuda:6'), covar=tensor([0.4006, 0.4336, 0.3565, 0.3923, 0.4225, 0.2586, 0.3682, 0.1888], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0265, 0.0236, 0.0277, 0.0258, 0.0229, 0.0258, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:20:27,738 INFO [finetune.py:976] (6/7) Epoch 26, batch 1450, loss[loss=0.2023, simple_loss=0.2719, pruned_loss=0.06635, over 4817.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2452, pruned_loss=0.04956, over 955783.78 frames. ], batch size: 33, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:20:45,824 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.092e+01 1.571e+02 1.855e+02 2.334e+02 4.645e+02, threshold=3.710e+02, percent-clipped=2.0 2023-03-27 07:21:01,133 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.97 vs. limit=2.0 2023-03-27 07:21:01,362 INFO [finetune.py:976] (6/7) Epoch 26, batch 1500, loss[loss=0.1539, simple_loss=0.2181, pruned_loss=0.04484, over 4250.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2457, pruned_loss=0.0497, over 956311.06 frames. ], batch size: 66, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:21:50,337 INFO [finetune.py:976] (6/7) Epoch 26, batch 1550, loss[loss=0.1466, simple_loss=0.222, pruned_loss=0.03564, over 4713.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2456, pruned_loss=0.04964, over 955068.32 frames. ], batch size: 23, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:21:58,327 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.73 vs. limit=5.0 2023-03-27 07:22:18,423 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=144767.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:22:18,964 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.202e+02 1.547e+02 1.850e+02 2.044e+02 4.068e+02, threshold=3.700e+02, percent-clipped=1.0 2023-03-27 07:22:30,244 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6618, 1.5802, 1.8388, 1.2110, 1.6518, 1.8553, 1.5299, 2.0262], device='cuda:6'), covar=tensor([0.1063, 0.1900, 0.1089, 0.1532, 0.0864, 0.1097, 0.2394, 0.0725], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0204, 0.0189, 0.0188, 0.0172, 0.0211, 0.0213, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:22:35,465 INFO [finetune.py:976] (6/7) Epoch 26, batch 1600, loss[loss=0.1932, simple_loss=0.2526, pruned_loss=0.06685, over 4828.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.244, pruned_loss=0.04958, over 955120.00 frames. ], batch size: 33, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:22:43,854 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 07:23:09,211 INFO [finetune.py:976] (6/7) Epoch 26, batch 1650, loss[loss=0.1705, simple_loss=0.2411, pruned_loss=0.04999, over 4937.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.242, pruned_loss=0.04951, over 954844.63 frames. 
], batch size: 38, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:23:26,320 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.503e+01 1.460e+02 1.738e+02 2.010e+02 3.428e+02, threshold=3.475e+02, percent-clipped=0.0 2023-03-27 07:23:42,422 INFO [finetune.py:976] (6/7) Epoch 26, batch 1700, loss[loss=0.1058, simple_loss=0.1726, pruned_loss=0.01955, over 4089.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2384, pruned_loss=0.04819, over 957172.34 frames. ], batch size: 17, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:23:59,178 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7211, 1.5872, 1.5603, 1.6968, 1.3226, 3.5914, 1.4116, 1.9176], device='cuda:6'), covar=tensor([0.3217, 0.2548, 0.2086, 0.2325, 0.1619, 0.0203, 0.2639, 0.1169], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 07:24:25,823 INFO [finetune.py:976] (6/7) Epoch 26, batch 1750, loss[loss=0.1675, simple_loss=0.2532, pruned_loss=0.04085, over 3804.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2418, pruned_loss=0.04969, over 955265.46 frames. ], batch size: 16, lr: 2.97e-03, grad_scale: 16.0 2023-03-27 07:24:42,910 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.967e+01 1.592e+02 1.823e+02 2.389e+02 4.337e+02, threshold=3.645e+02, percent-clipped=3.0 2023-03-27 07:24:43,061 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8453, 2.0445, 1.7018, 1.6218, 2.4543, 2.5551, 2.0471, 1.8946], device='cuda:6'), covar=tensor([0.0453, 0.0389, 0.0608, 0.0389, 0.0278, 0.0413, 0.0423, 0.0400], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0108, 0.0149, 0.0113, 0.0103, 0.0117, 0.0104, 0.0114], device='cuda:6'), out_proj_covar=tensor([7.9816e-05, 8.2758e-05, 1.1597e-04, 8.6408e-05, 7.9600e-05, 8.6289e-05, 7.7167e-05, 8.6889e-05], device='cuda:6') 2023-03-27 07:24:59,586 INFO [finetune.py:976] (6/7) Epoch 26, batch 1800, loss[loss=0.1691, simple_loss=0.2525, pruned_loss=0.04283, over 4908.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2451, pruned_loss=0.05068, over 955005.23 frames. ], batch size: 37, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:25:02,626 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=144996.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:08,837 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0092, 1.7999, 2.3079, 2.0979, 1.8454, 4.5313, 2.0165, 1.7906], device='cuda:6'), covar=tensor([0.0876, 0.1689, 0.0980, 0.0888, 0.1505, 0.0180, 0.1289, 0.1748], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 07:25:23,045 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145027.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:33,498 INFO [finetune.py:976] (6/7) Epoch 26, batch 1850, loss[loss=0.1819, simple_loss=0.2529, pruned_loss=0.05547, over 4820.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2443, pruned_loss=0.04974, over 952655.07 frames. 
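Note: the grad_scale field above is mixed-precision loss scaling; the move from 16.0 back to 32.0 at batch 1800 matches the behavior of torch.cuda.amp.GradScaler, which halves its scale on overflowing gradients and doubles it again after a run of clean steps. A sketch of that pattern (the training-loop details are placeholders, not finetune.py's actual structure):

    import torch

    scaler = torch.cuda.amp.GradScaler(init_scale=32.0, growth_factor=2.0,
                                       backoff_factor=0.5, growth_interval=2000)

    def training_step(model, optimizer, batch):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = model(batch)        # placeholder forward pass
        scaler.scale(loss).backward()
        scaler.step(optimizer)         # skipped internally on inf/nan grads
        scaler.update()                # halve on overflow, grow when stable
        print(f"grad_scale: {scaler.get_scale()}")
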
], batch size: 25, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:25:43,158 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145057.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 07:25:48,494 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145066.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:49,083 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145067.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:25:49,576 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.533e+02 1.829e+02 2.183e+02 4.392e+02, threshold=3.659e+02, percent-clipped=3.0 2023-03-27 07:26:04,150 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:26:06,519 INFO [finetune.py:976] (6/7) Epoch 26, batch 1900, loss[loss=0.1717, simple_loss=0.2553, pruned_loss=0.04406, over 4796.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2466, pruned_loss=0.05103, over 951651.12 frames. ], batch size: 51, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:26:19,653 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 07:26:21,299 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145115.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:26:29,578 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145127.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:26:46,686 INFO [finetune.py:976] (6/7) Epoch 26, batch 1950, loss[loss=0.1581, simple_loss=0.2191, pruned_loss=0.04861, over 4903.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2453, pruned_loss=0.05045, over 952329.82 frames. ], batch size: 36, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:27:15,913 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.125e+02 1.577e+02 1.831e+02 2.188e+02 4.363e+02, threshold=3.662e+02, percent-clipped=3.0 2023-03-27 07:27:24,177 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6368, 2.5322, 2.1492, 1.1119, 2.3096, 2.0509, 1.9067, 2.3383], device='cuda:6'), covar=tensor([0.0912, 0.0696, 0.1570, 0.1951, 0.1439, 0.2084, 0.2038, 0.0929], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0191, 0.0199, 0.0181, 0.0209, 0.0209, 0.0223, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:27:32,908 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-27 07:27:40,102 INFO [finetune.py:976] (6/7) Epoch 26, batch 2000, loss[loss=0.1825, simple_loss=0.2577, pruned_loss=0.0537, over 4795.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2433, pruned_loss=0.04978, over 954678.11 frames. ], batch size: 45, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:28:07,102 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.69 vs. limit=2.0 2023-03-27 07:28:10,243 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 07:28:13,279 INFO [finetune.py:976] (6/7) Epoch 26, batch 2050, loss[loss=0.1599, simple_loss=0.2252, pruned_loss=0.0473, over 4140.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2408, pruned_loss=0.04909, over 955390.02 frames. 
], batch size: 65, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:28:27,236 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 07:28:30,377 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.187e+01 1.436e+02 1.792e+02 2.264e+02 4.038e+02, threshold=3.583e+02, percent-clipped=1.0 2023-03-27 07:28:41,830 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5045, 3.3641, 3.1828, 1.5197, 3.4254, 2.5939, 0.7093, 2.3791], device='cuda:6'), covar=tensor([0.2388, 0.2354, 0.1873, 0.3571, 0.1299, 0.1078, 0.4631, 0.1596], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0178, 0.0160, 0.0130, 0.0160, 0.0123, 0.0148, 0.0123], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 07:28:45,862 INFO [finetune.py:976] (6/7) Epoch 26, batch 2100, loss[loss=0.1849, simple_loss=0.2506, pruned_loss=0.05961, over 4828.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2409, pruned_loss=0.04923, over 956382.22 frames. ], batch size: 30, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:29:03,666 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7150, 1.4771, 1.4942, 0.8495, 1.6561, 1.7879, 1.8356, 1.4605], device='cuda:6'), covar=tensor([0.0844, 0.0677, 0.0540, 0.0564, 0.0483, 0.0633, 0.0339, 0.0632], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0128, 0.0123, 0.0130, 0.0129, 0.0142, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.9119e-05, 1.0643e-04, 9.1242e-05, 8.6402e-05, 9.1083e-05, 9.1733e-05, 1.0096e-04, 1.0652e-04], device='cuda:6') 2023-03-27 07:29:19,662 INFO [finetune.py:976] (6/7) Epoch 26, batch 2150, loss[loss=0.1737, simple_loss=0.262, pruned_loss=0.04266, over 4836.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2433, pruned_loss=0.05006, over 956205.92 frames. 
], batch size: 49, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:29:28,938 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145352.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:29:39,077 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9643, 1.9142, 1.6190, 1.7883, 1.7597, 1.7593, 1.8271, 2.5057], device='cuda:6'), covar=tensor([0.3823, 0.3827, 0.3207, 0.3804, 0.3884, 0.2506, 0.3429, 0.1688], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0265, 0.0237, 0.0278, 0.0260, 0.0230, 0.0258, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:29:43,498 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.1439, 4.4184, 4.7215, 4.9204, 4.8657, 4.6038, 5.2110, 1.5928], device='cuda:6'), covar=tensor([0.0721, 0.0843, 0.0698, 0.0795, 0.1172, 0.1594, 0.0549, 0.5672], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0249, 0.0280, 0.0295, 0.0337, 0.0287, 0.0304, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:29:47,543 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.494e+02 1.709e+02 2.298e+02 6.165e+02, threshold=3.419e+02, percent-clipped=3.0 2023-03-27 07:29:49,489 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145371.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:29:56,782 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145383.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:30:02,627 INFO [finetune.py:976] (6/7) Epoch 26, batch 2200, loss[loss=0.162, simple_loss=0.2373, pruned_loss=0.04337, over 4812.00 frames. ], tot_loss[loss=0.1727, simple_loss=0.245, pruned_loss=0.05023, over 956598.19 frames. ], batch size: 39, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:30:03,785 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3627, 1.3201, 1.6812, 2.4229, 1.6513, 2.1462, 0.9312, 2.1642], device='cuda:6'), covar=tensor([0.1651, 0.1346, 0.1032, 0.0714, 0.0923, 0.1295, 0.1463, 0.0551], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0162, 0.0101, 0.0134, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 07:30:22,402 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 07:30:23,253 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145422.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:30:29,368 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145432.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:30:36,242 INFO [finetune.py:976] (6/7) Epoch 26, batch 2250, loss[loss=0.1848, simple_loss=0.2573, pruned_loss=0.05616, over 4826.00 frames. ], tot_loss[loss=0.1744, simple_loss=0.2465, pruned_loss=0.0512, over 957077.04 frames. 
], batch size: 38, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:30:37,622 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5045, 1.7913, 1.4525, 1.4605, 2.0008, 2.0246, 1.8048, 1.7453], device='cuda:6'), covar=tensor([0.0578, 0.0354, 0.0623, 0.0395, 0.0335, 0.0619, 0.0324, 0.0431], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0109, 0.0149, 0.0113, 0.0103, 0.0118, 0.0104, 0.0115], device='cuda:6'), out_proj_covar=tensor([8.0326e-05, 8.3024e-05, 1.1643e-04, 8.6571e-05, 7.9891e-05, 8.6772e-05, 7.7549e-05, 8.7307e-05], device='cuda:6') 2023-03-27 07:30:44,582 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4565, 1.5074, 2.0941, 1.7327, 1.7978, 3.9136, 1.6188, 1.6409], device='cuda:6'), covar=tensor([0.1055, 0.1779, 0.1265, 0.0952, 0.1431, 0.0188, 0.1403, 0.1757], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0072, 0.0076, 0.0090, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 07:30:53,947 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.189e+01 1.488e+02 1.760e+02 2.143e+02 3.776e+02, threshold=3.521e+02, percent-clipped=2.0 2023-03-27 07:31:08,990 INFO [finetune.py:976] (6/7) Epoch 26, batch 2300, loss[loss=0.1675, simple_loss=0.2411, pruned_loss=0.04701, over 4776.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2445, pruned_loss=0.04966, over 953975.11 frames. ], batch size: 28, lr: 2.97e-03, grad_scale: 32.0 2023-03-27 07:31:41,343 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6962, 3.8001, 3.5792, 1.6916, 3.8612, 2.9101, 1.0453, 2.7138], device='cuda:6'), covar=tensor([0.2124, 0.1838, 0.1578, 0.3549, 0.1031, 0.0962, 0.4373, 0.1430], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0179, 0.0160, 0.0130, 0.0161, 0.0124, 0.0149, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 07:31:42,489 INFO [finetune.py:976] (6/7) Epoch 26, batch 2350, loss[loss=0.1432, simple_loss=0.2107, pruned_loss=0.03784, over 4734.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2428, pruned_loss=0.04869, over 955902.78 frames. ], batch size: 23, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:31:53,147 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7253, 2.8037, 2.5930, 1.7466, 2.5909, 2.8655, 2.7609, 2.3370], device='cuda:6'), covar=tensor([0.0535, 0.0547, 0.0785, 0.0941, 0.0863, 0.0745, 0.0642, 0.0976], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0119, 0.0128, 0.0138, 0.0139, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:32:06,623 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.544e+02 1.840e+02 2.226e+02 4.643e+02, threshold=3.680e+02, percent-clipped=1.0 2023-03-27 07:32:34,357 INFO [finetune.py:976] (6/7) Epoch 26, batch 2400, loss[loss=0.1827, simple_loss=0.25, pruned_loss=0.05768, over 4822.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2407, pruned_loss=0.04843, over 955829.95 frames. 
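Note: the lr field ticks down from 2.97e-03 to 2.96e-03 between batches 2300 and 2350 above. An Eden-style schedule of the following shape reproduces these values, assuming base_lr=0.004, lr_batches=1e5 and lr_epochs=100 (assumed constants; the exact scheduler lives in icefall's optim.py):

    def eden_lr(base_lr, batch, epoch, lr_batches=100000.0, lr_epochs=100.0):
        """lr = base_lr * ((batch/lr_batches)**2 + 1)**-0.25
                        * ((epoch/lr_epochs)**2 + 1)**-0.25   (sketch)"""
        return (base_lr
                * ((batch / lr_batches) ** 2 + 1) ** -0.25
                * ((epoch / lr_epochs) ** 2 + 1) ** -0.25)

    print(f"lr: {eden_lr(0.004, 145600, 26):.2e}")  # -> 2.96e-03
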
], batch size: 51, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:32:34,446 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1960, 3.6168, 3.8614, 4.0165, 3.9439, 3.6661, 4.2360, 1.3826], device='cuda:6'), covar=tensor([0.0824, 0.0922, 0.0868, 0.1073, 0.1346, 0.1628, 0.0817, 0.5804], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0249, 0.0280, 0.0296, 0.0337, 0.0287, 0.0304, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:32:35,673 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145594.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:33:17,103 INFO [finetune.py:976] (6/7) Epoch 26, batch 2450, loss[loss=0.1435, simple_loss=0.2223, pruned_loss=0.0324, over 4914.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2379, pruned_loss=0.04749, over 954219.50 frames. ], batch size: 32, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:33:17,815 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145643.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:33:23,856 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145652.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:33:26,163 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3653, 1.5307, 0.7418, 2.0028, 2.4098, 1.8348, 1.8468, 1.9164], device='cuda:6'), covar=tensor([0.1240, 0.1916, 0.2044, 0.1072, 0.1798, 0.1817, 0.1303, 0.1898], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 07:33:26,202 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145655.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:33:34,919 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.521e+01 1.449e+02 1.824e+02 2.163e+02 4.630e+02, threshold=3.648e+02, percent-clipped=2.0 2023-03-27 07:33:37,604 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 07:33:45,644 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145683.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:33:51,048 INFO [finetune.py:976] (6/7) Epoch 26, batch 2500, loss[loss=0.1868, simple_loss=0.2537, pruned_loss=0.05991, over 4867.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2399, pruned_loss=0.04818, over 954712.41 frames. 
], batch size: 31, lr: 2.96e-03, grad_scale: 32.0 2023-03-27 07:33:55,945 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145700.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:33:58,911 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145704.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:01,234 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9897, 1.8092, 1.5659, 1.5687, 1.7212, 1.7102, 1.7485, 2.4476], device='cuda:6'), covar=tensor([0.3334, 0.3746, 0.2988, 0.3646, 0.3601, 0.2189, 0.3335, 0.1608], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0235, 0.0275, 0.0258, 0.0228, 0.0257, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:34:11,657 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=145722.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:12,374 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.83 vs. limit=5.0 2023-03-27 07:34:15,143 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145727.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:17,567 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145731.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:34:24,603 INFO [finetune.py:976] (6/7) Epoch 26, batch 2550, loss[loss=0.193, simple_loss=0.2621, pruned_loss=0.06192, over 4891.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2427, pruned_loss=0.04891, over 953493.70 frames. ], batch size: 32, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:34:42,329 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.531e+01 1.551e+02 1.807e+02 2.106e+02 4.459e+02, threshold=3.615e+02, percent-clipped=2.0 2023-03-27 07:34:43,553 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=145770.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 07:35:08,868 INFO [finetune.py:976] (6/7) Epoch 26, batch 2600, loss[loss=0.2001, simple_loss=0.2775, pruned_loss=0.06133, over 4819.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2447, pruned_loss=0.04955, over 956528.33 frames. ], batch size: 33, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:35:28,226 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=145821.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:35:42,707 INFO [finetune.py:976] (6/7) Epoch 26, batch 2650, loss[loss=0.1853, simple_loss=0.2526, pruned_loss=0.059, over 4195.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2463, pruned_loss=0.05045, over 955590.34 frames. 
], batch size: 65, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:35:55,839 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3405, 1.3444, 1.4980, 1.4556, 1.5718, 2.9632, 1.3280, 1.4692], device='cuda:6'), covar=tensor([0.1013, 0.1833, 0.1127, 0.0998, 0.1533, 0.0276, 0.1522, 0.1821], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 07:36:00,020 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.986e+01 1.512e+02 1.783e+02 2.110e+02 4.476e+02, threshold=3.566e+02, percent-clipped=1.0 2023-03-27 07:36:09,562 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=145882.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 07:36:16,424 INFO [finetune.py:976] (6/7) Epoch 26, batch 2700, loss[loss=0.1522, simple_loss=0.2281, pruned_loss=0.03814, over 4813.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2455, pruned_loss=0.04976, over 954787.31 frames. ], batch size: 38, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:36:48,576 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1236, 1.3505, 1.4795, 1.3607, 1.4540, 2.5344, 1.2310, 1.4065], device='cuda:6'), covar=tensor([0.1137, 0.2354, 0.1019, 0.0998, 0.1891, 0.0414, 0.1897, 0.2275], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 07:36:49,659 INFO [finetune.py:976] (6/7) Epoch 26, batch 2750, loss[loss=0.1661, simple_loss=0.2369, pruned_loss=0.04763, over 4895.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2431, pruned_loss=0.04966, over 953886.95 frames. ], batch size: 43, lr: 2.96e-03, grad_scale: 16.0 2023-03-27 07:36:53,385 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9955, 1.6054, 2.1219, 2.0538, 1.8621, 1.8284, 1.9929, 2.0133], device='cuda:6'), covar=tensor([0.4175, 0.4085, 0.3259, 0.3718, 0.5023, 0.4023, 0.4621, 0.2968], device='cuda:6'), in_proj_covar=tensor([0.0266, 0.0247, 0.0266, 0.0294, 0.0294, 0.0271, 0.0300, 0.0251], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 07:36:55,133 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145950.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 07:36:56,762 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 07:37:00,654 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4197, 1.3551, 1.3936, 0.7615, 1.5852, 1.6708, 1.7034, 1.3281], device='cuda:6'), covar=tensor([0.1090, 0.0760, 0.0630, 0.0636, 0.0524, 0.0643, 0.0393, 0.0688], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0128, 0.0122, 0.0130, 0.0129, 0.0142, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.9061e-05, 1.0631e-04, 9.1093e-05, 8.5941e-05, 9.1062e-05, 9.1470e-05, 1.0093e-04, 1.0625e-04], device='cuda:6') 2023-03-27 07:37:07,588 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.569e+02 1.804e+02 2.200e+02 3.850e+02, threshold=3.609e+02, percent-clipped=2.0 2023-03-27 07:37:29,449 INFO [finetune.py:976] (6/7) Epoch 26, batch 2800, loss[loss=0.1612, simple_loss=0.2304, pruned_loss=0.04602, over 4851.00 frames. 
2023-03-27 07:37:39,467 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=145999.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:38:14,435 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146027.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:38:16,786 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9746, 1.7837, 1.9410, 1.1692, 1.9999, 2.0299, 1.9365, 1.6711], device='cuda:6'), covar=tensor([0.0592, 0.0757, 0.0684, 0.0992, 0.0793, 0.0663, 0.0685, 0.1122], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0120, 0.0128, 0.0138, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:38:17,433 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4465, 2.4311, 1.8733, 2.4149, 2.4068, 2.1294, 2.7103, 2.4634], device='cuda:6'), covar=tensor([0.1243, 0.2032, 0.3004, 0.2363, 0.2367, 0.1576, 0.2965, 0.1669], device='cuda:6'), in_proj_covar=tensor([0.0187, 0.0189, 0.0235, 0.0251, 0.0249, 0.0206, 0.0214, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:38:21,634 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.58 vs. limit=2.0
2023-03-27 07:38:24,986 INFO [finetune.py:976] (6/7) Epoch 26, batch 2850, loss[loss=0.1363, simple_loss=0.21, pruned_loss=0.03129, over 4764.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2383, pruned_loss=0.04828, over 955016.67 frames. ], batch size: 26, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:38:37,434 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2526, 4.8891, 4.6491, 2.7126, 5.0062, 3.9167, 1.0498, 3.6276], device='cuda:6'), covar=tensor([0.2027, 0.1911, 0.1321, 0.2911, 0.0757, 0.0808, 0.4473, 0.1272], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0177, 0.0158, 0.0129, 0.0159, 0.0122, 0.0147, 0.0122], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6')
2023-03-27 07:38:42,241 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.185e+01 1.523e+02 1.819e+02 2.110e+02 4.930e+02, threshold=3.638e+02, percent-clipped=2.0
2023-03-27 07:38:46,430 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146075.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:38:58,299 INFO [finetune.py:976] (6/7) Epoch 26, batch 2900, loss[loss=0.1486, simple_loss=0.2328, pruned_loss=0.03219, over 4697.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2404, pruned_loss=0.0492, over 953864.22 frames. ], batch size: 23, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:39:31,498 INFO [finetune.py:976] (6/7) Epoch 26, batch 2950, loss[loss=0.1501, simple_loss=0.2164, pruned_loss=0.04189, over 4116.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2426, pruned_loss=0.04913, over 954814.32 frames. ], batch size: 17, lr: 2.96e-03, grad_scale: 16.0
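The [zipformer.py:2441] entries report, per attention head, the average entropy of the attention weight distribution (8 values for 8 heads), alongside covariance diagnostics for the projections. A sketch of the entropy part; shapes and names here are illustrative, not zipformer.py's actual internals:

import torch

def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
    # attn_weights: (num_heads, num_queries, num_keys), each row sums to 1.
    p = attn_weights.clamp(min=1.0e-20)
    ent = -(p * p.log()).sum(dim=-1)   # entropy in nats per (head, query)
    return ent.mean(dim=-1)            # one value per head, as in the log

weights = torch.softmax(torch.randn(8, 10, 10), dim=-1)
print(attn_weights_entropy(weights))   # a tensor with 8 entries, like the log lines

Low values (e.g. the 0.4197 head above) indicate a head that attends very sharply; values near log(num_keys) indicate nearly uniform attention.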
2023-03-27 07:39:36,384 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146149.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:39:49,295 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.099e+02 1.550e+02 1.859e+02 2.106e+02 3.478e+02, threshold=3.719e+02, percent-clipped=0.0
2023-03-27 07:39:54,692 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146177.0, num_to_drop=1, layers_to_drop={2}
2023-03-27 07:40:04,822 INFO [finetune.py:976] (6/7) Epoch 26, batch 3000, loss[loss=0.2229, simple_loss=0.2864, pruned_loss=0.07974, over 4262.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2453, pruned_loss=0.05066, over 954453.27 frames. ], batch size: 65, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:40:04,822 INFO [finetune.py:1001] (6/7) Computing validation loss
2023-03-27 07:40:19,935 INFO [finetune.py:1010] (6/7) Epoch 26, validation: loss=0.1577, simple_loss=0.2252, pruned_loss=0.04507, over 2265189.00 frames.
2023-03-27 07:40:19,936 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB
2023-03-27 07:40:35,403 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146210.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:40:40,651 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0495, 3.0609, 2.9195, 1.9433, 2.8297, 3.2479, 3.0720, 2.5568], device='cuda:6'), covar=tensor([0.0494, 0.0490, 0.0586, 0.0829, 0.0613, 0.0547, 0.0516, 0.0934], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0120, 0.0128, 0.0138, 0.0139, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:40:54,467 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6568, 1.4815, 2.1189, 1.9188, 1.6298, 3.6939, 1.4908, 1.6463], device='cuda:6'), covar=tensor([0.0948, 0.1835, 0.1069, 0.0908, 0.1622, 0.0267, 0.1498, 0.1762], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6')
2023-03-27 07:40:56,715 INFO [finetune.py:976] (6/7) Epoch 26, batch 3050, loss[loss=0.1679, simple_loss=0.2156, pruned_loss=0.06007, over 4171.00 frames. ], tot_loss[loss=0.1737, simple_loss=0.2467, pruned_loss=0.05035, over 954738.82 frames. ], batch size: 18, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:41:02,578 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146250.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:41:14,895 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.646e+01 1.408e+02 1.795e+02 2.163e+02 4.679e+02, threshold=3.589e+02, percent-clipped=3.0
2023-03-27 07:41:29,847 INFO [finetune.py:976] (6/7) Epoch 26, batch 3100, loss[loss=0.1643, simple_loss=0.2368, pruned_loss=0.04589, over 4930.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2451, pruned_loss=0.05023, over 953206.32 frames. ], batch size: 38, lr: 2.96e-03, grad_scale: 16.0
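The "Computing validation loss" step above evaluates the model over the entire dev set, which is why every validation line reports the same fixed "over 2265189.00 frames". A minimal sketch of that loop; compute_loss and the loader are stand-ins, not finetune.py's real signatures:

import torch

def validate(model, valid_loader, compute_loss):
    model.eval()
    tot_loss, tot_frames = 0.0, 0.0
    with torch.no_grad():
        for batch in valid_loader:
            loss, num_frames = compute_loss(model, batch)  # summed loss, frame count
            tot_loss += loss.item()
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames   # reported as "validation: loss=..."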
2023-03-27 07:41:34,032 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146298.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:41:34,706 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146299.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:41:45,486 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.41 vs. limit=2.0
2023-03-27 07:41:49,649 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-27 07:41:55,487 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9451, 1.8271, 1.6394, 2.1261, 2.4957, 2.0868, 1.8325, 1.5708], device='cuda:6'), covar=tensor([0.1991, 0.1788, 0.1811, 0.1505, 0.1543, 0.1151, 0.2179, 0.1809], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0199, 0.0247, 0.0192, 0.0219, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:42:02,611 INFO [finetune.py:976] (6/7) Epoch 26, batch 3150, loss[loss=0.1396, simple_loss=0.2226, pruned_loss=0.02829, over 4927.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2415, pruned_loss=0.04918, over 953221.84 frames. ], batch size: 37, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:42:06,562 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146347.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:42:06,621 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3192, 2.3889, 2.2782, 1.5198, 2.3583, 2.4845, 2.3519, 2.0317], device='cuda:6'), covar=tensor([0.0561, 0.0511, 0.0692, 0.0838, 0.0704, 0.0654, 0.0623, 0.1022], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0120, 0.0128, 0.0138, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:42:21,143 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.044e+02 1.464e+02 1.851e+02 2.163e+02 3.423e+02, threshold=3.701e+02, percent-clipped=0.0
2023-03-27 07:42:38,056 INFO [finetune.py:976] (6/7) Epoch 26, batch 3200, loss[loss=0.1393, simple_loss=0.2098, pruned_loss=0.03436, over 4697.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2403, pruned_loss=0.04992, over 953330.88 frames. ], batch size: 23, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:43:35,345 INFO [finetune.py:976] (6/7) Epoch 26, batch 3250, loss[loss=0.2017, simple_loss=0.2783, pruned_loss=0.06259, over 4910.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2399, pruned_loss=0.04921, over 953943.83 frames. ], batch size: 37, lr: 2.96e-03, grad_scale: 16.0
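On my reading of icefall's scaling.py, the "Whitening: ... metric=1.41 vs. limit=2.0" lines measure how far a layer's features are from having an identity-like (white) covariance: roughly the ratio E[lambda^2] / E[lambda]^2 over eigenvalues of the per-group feature covariance, which is 1.0 for perfectly white features and grows as the covariance becomes lopsided; a constraint activates when the metric exceeds the limit. Treat the following as an approximation of the real metric, not its exact definition:

import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    # x: (num_frames, num_channels); channels split into num_groups groups.
    n, c = x.shape
    g = c // num_groups
    xg = x.reshape(n, num_groups, g).permute(1, 0, 2)    # (groups, frames, g)
    cov = xg.transpose(1, 2) @ xg / n                    # (groups, g, g)
    mean_eig = cov.diagonal(dim1=1, dim2=2).mean()       # E[lambda] = mean diag
    mean_eig_sq = (cov ** 2).sum(dim=(1, 2)).mean() / g  # E[lambda^2] = tr(cov^2)/g
    return mean_eig_sq / (mean_eig ** 2 + 1.0e-20)

x = torch.randn(1000, 96)
print(whitening_metric(x, num_groups=8))  # close to 1.0 for white noise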
2023-03-27 07:43:44,313 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5217, 1.4284, 1.3608, 1.4400, 0.9377, 3.2267, 1.0990, 1.4144], device='cuda:6'), covar=tensor([0.3236, 0.2566, 0.2195, 0.2436, 0.1936, 0.0215, 0.2800, 0.1380], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0115, 0.0120, 0.0123, 0.0112, 0.0095, 0.0093, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 07:43:53,730 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.451e+02 1.808e+02 2.175e+02 4.535e+02, threshold=3.616e+02, percent-clipped=3.0
2023-03-27 07:43:58,672 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146477.0, num_to_drop=1, layers_to_drop={0}
2023-03-27 07:44:08,659 INFO [finetune.py:976] (6/7) Epoch 26, batch 3300, loss[loss=0.1542, simple_loss=0.2286, pruned_loss=0.03987, over 4758.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2436, pruned_loss=0.05023, over 953578.74 frames. ], batch size: 26, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:44:17,124 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146505.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:44:30,707 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146525.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 07:44:31,352 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7037, 1.9829, 1.5569, 1.6127, 2.2084, 2.3029, 1.8769, 1.8996], device='cuda:6'), covar=tensor([0.0483, 0.0333, 0.0591, 0.0374, 0.0309, 0.0512, 0.0366, 0.0365], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0109, 0.0149, 0.0113, 0.0103, 0.0118, 0.0105, 0.0115], device='cuda:6'), out_proj_covar=tensor([8.0259e-05, 8.3016e-05, 1.1606e-04, 8.6360e-05, 7.9641e-05, 8.6885e-05, 7.7825e-05, 8.7094e-05], device='cuda:6')
2023-03-27 07:44:33,768 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2066, 2.1950, 1.9810, 2.3677, 2.1105, 2.1867, 2.0914, 2.7906], device='cuda:6'), covar=tensor([0.3207, 0.4286, 0.3126, 0.3658, 0.4447, 0.2145, 0.4364, 0.1451], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0265, 0.0237, 0.0277, 0.0259, 0.0230, 0.0257, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:44:41,544 INFO [finetune.py:976] (6/7) Epoch 26, batch 3350, loss[loss=0.1744, simple_loss=0.2339, pruned_loss=0.05739, over 4774.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2446, pruned_loss=0.05003, over 955218.69 frames. ], batch size: 26, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:44:48,836 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8974, 1.3467, 1.8595, 1.9213, 1.6869, 1.6283, 1.9181, 1.7895], device='cuda:6'), covar=tensor([0.3537, 0.3560, 0.3198, 0.3160, 0.4443, 0.3642, 0.3744, 0.2851], device='cuda:6'), in_proj_covar=tensor([0.0267, 0.0248, 0.0269, 0.0296, 0.0296, 0.0272, 0.0303, 0.0253], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:45:00,358 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.852e+01 1.562e+02 1.841e+02 2.282e+02 4.006e+02, threshold=3.682e+02, percent-clipped=1.0
2023-03-27 07:45:06,908 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.83 vs. limit=5.0
2023-03-27 07:45:14,910 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146591.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:45:15,431 INFO [finetune.py:976] (6/7) Epoch 26, batch 3400, loss[loss=0.1756, simple_loss=0.241, pruned_loss=0.05508, over 4243.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.247, pruned_loss=0.05072, over 955204.02 frames. ], batch size: 66, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:45:21,729 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4334, 1.3312, 1.4704, 1.5893, 1.5256, 2.9452, 1.2438, 1.4636], device='cuda:6'), covar=tensor([0.1005, 0.1842, 0.1157, 0.0937, 0.1647, 0.0300, 0.1600, 0.1906], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0076, 0.0090, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 07:45:48,144 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146626.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:45:58,655 INFO [finetune.py:976] (6/7) Epoch 26, batch 3450, loss[loss=0.179, simple_loss=0.253, pruned_loss=0.05249, over 4910.00 frames. ], tot_loss[loss=0.1742, simple_loss=0.2469, pruned_loss=0.05077, over 954823.65 frames. ], batch size: 36, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:46:01,773 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2769, 3.7425, 3.9482, 4.1656, 4.0506, 3.7408, 4.3633, 1.4056], device='cuda:6'), covar=tensor([0.0748, 0.0871, 0.0809, 0.0916, 0.1175, 0.1612, 0.0680, 0.5741], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0247, 0.0279, 0.0294, 0.0336, 0.0285, 0.0303, 0.0299], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:46:04,883 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146652.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:46:17,058 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.450e+02 1.708e+02 2.017e+02 4.995e+02, threshold=3.417e+02, percent-clipped=1.0
2023-03-27 07:46:28,556 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146687.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 07:46:32,389 INFO [finetune.py:976] (6/7) Epoch 26, batch 3500, loss[loss=0.1828, simple_loss=0.2532, pruned_loss=0.05621, over 4906.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2445, pruned_loss=0.05009, over 956557.44 frames. ], batch size: 46, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:46:57,590 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6479, 1.4802, 1.4317, 1.4899, 1.8887, 1.8709, 1.5589, 1.3986], device='cuda:6'), covar=tensor([0.0314, 0.0345, 0.0612, 0.0308, 0.0218, 0.0396, 0.0325, 0.0414], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0107, 0.0147, 0.0112, 0.0102, 0.0117, 0.0104, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.9436e-05, 8.2222e-05, 1.1496e-04, 8.5720e-05, 7.8779e-05, 8.6252e-05, 7.7371e-05, 8.6233e-05], device='cuda:6')
2023-03-27 07:47:00,382 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0
2023-03-27 07:47:05,274 INFO [finetune.py:976] (6/7) Epoch 26, batch 3550, loss[loss=0.1615, simple_loss=0.2292, pruned_loss=0.04686, over 4816.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2409, pruned_loss=0.04869, over 958272.54 frames. ], batch size: 25, lr: 2.96e-03, grad_scale: 16.0
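The [zipformer.py:1188] lines track per-stack layer dropout: each encoder stack has a warmup window [warmup_begin, warmup_end] in batches and on each step picks a random subset of layers to drop. The occasional "num_to_drop=1" at batch_count around 146k, long past warmup, suggests a small residual drop probability. The schedule below is a guess for illustration, not zipformer's actual formula:

import random

def choose_layers_to_drop(batch_count, num_layers,
                          warmup_begin, warmup_end,
                          warmup_prob=0.5, residual_prob=0.075):
    if batch_count < warmup_begin:
        p = warmup_prob
    elif batch_count < warmup_end:
        # linearly anneal from warmup_prob down to residual_prob
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = warmup_prob + frac * (residual_prob - warmup_prob)
    else:
        p = residual_prob
    layers_to_drop = {i for i in range(num_layers) if random.random() < p}
    print(f"warmup_begin={warmup_begin:.1f}, warmup_end={warmup_end:.1f}, "
          f"batch_count={batch_count:.1f}, num_to_drop={len(layers_to_drop)}, "
          f"layers_to_drop={layers_to_drop or set()}")
    return layers_to_drop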
2023-03-27 07:47:13,487 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.12 vs. limit=5.0
2023-03-27 07:47:22,722 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.051e+02 1.430e+02 1.748e+02 2.315e+02 5.079e+02, threshold=3.497e+02, percent-clipped=6.0
2023-03-27 07:47:38,113 INFO [finetune.py:976] (6/7) Epoch 26, batch 3600, loss[loss=0.16, simple_loss=0.2424, pruned_loss=0.03883, over 4822.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.238, pruned_loss=0.0473, over 958323.50 frames. ], batch size: 40, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:47:47,299 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=146805.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:48:26,739 INFO [finetune.py:976] (6/7) Epoch 26, batch 3650, loss[loss=0.181, simple_loss=0.2555, pruned_loss=0.05322, over 4780.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2411, pruned_loss=0.04824, over 957950.65 frames. ], batch size: 29, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:48:28,122 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=146844.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:48:39,244 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=146853.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:48:53,567 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.488e+01 1.511e+02 1.813e+02 2.229e+02 3.524e+02, threshold=3.627e+02, percent-clipped=1.0
2023-03-27 07:49:12,954 INFO [finetune.py:976] (6/7) Epoch 26, batch 3700, loss[loss=0.1849, simple_loss=0.2547, pruned_loss=0.05751, over 4892.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2446, pruned_loss=0.04877, over 957759.58 frames. ], batch size: 35, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:49:21,422 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=146905.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:49:46,521 INFO [finetune.py:976] (6/7) Epoch 26, batch 3750, loss[loss=0.1635, simple_loss=0.2397, pruned_loss=0.04366, over 4810.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2455, pruned_loss=0.04949, over 958962.18 frames. ], batch size: 39, lr: 2.96e-03, grad_scale: 16.0
2023-03-27 07:49:49,609 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146947.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:49:50,834 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1978, 3.6378, 3.8313, 4.0255, 3.9635, 3.6872, 4.2804, 1.3680], device='cuda:6'), covar=tensor([0.0807, 0.0977, 0.0867, 0.1007, 0.1266, 0.1619, 0.0708, 0.5787], device='cuda:6'), in_proj_covar=tensor([0.0350, 0.0248, 0.0282, 0.0295, 0.0339, 0.0287, 0.0304, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:50:01,250 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0
2023-03-27 07:50:01,656 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.08 vs. limit=5.0
2023-03-27 07:50:03,845 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.820e+01 1.503e+02 1.791e+02 2.461e+02 5.017e+02, threshold=3.581e+02, percent-clipped=5.0
2023-03-27 07:50:12,673 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=146982.0, num_to_drop=1, layers_to_drop={1}
2023-03-27 07:50:19,582 INFO [finetune.py:976] (6/7) Epoch 26, batch 3800, loss[loss=0.2275, simple_loss=0.2942, pruned_loss=0.08043, over 4813.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2464, pruned_loss=0.04931, over 958825.94 frames. ], batch size: 38, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:50:26,053 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0
2023-03-27 07:50:29,825 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.1842, 2.7807, 2.5880, 1.4335, 2.7648, 2.2492, 2.2777, 2.5235], device='cuda:6'), covar=tensor([0.0823, 0.0814, 0.1707, 0.2051, 0.1749, 0.2360, 0.1971, 0.1133], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0201, 0.0183, 0.0211, 0.0212, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:50:55,309 INFO [finetune.py:976] (6/7) Epoch 26, batch 3850, loss[loss=0.1625, simple_loss=0.2442, pruned_loss=0.04044, over 4800.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2454, pruned_loss=0.04914, over 958642.14 frames. ], batch size: 51, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:51:21,146 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.517e+02 1.854e+02 2.266e+02 5.483e+02, threshold=3.707e+02, percent-clipped=2.0
2023-03-27 07:51:21,328 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0
2023-03-27 07:51:21,887 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4590, 2.2800, 1.8414, 2.3971, 2.3091, 2.0342, 2.6395, 2.3902], device='cuda:6'), covar=tensor([0.1246, 0.1922, 0.2872, 0.2478, 0.2377, 0.1597, 0.2466, 0.1677], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0190, 0.0235, 0.0252, 0.0249, 0.0206, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:51:37,020 INFO [finetune.py:976] (6/7) Epoch 26, batch 3900, loss[loss=0.1303, simple_loss=0.2134, pruned_loss=0.02359, over 4755.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2436, pruned_loss=0.04917, over 958730.99 frames. ], batch size: 28, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:51:39,620 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8346, 2.5198, 2.2699, 2.9614, 2.6491, 2.3926, 3.0984, 2.8032], device='cuda:6'), covar=tensor([0.1163, 0.2048, 0.2747, 0.2159, 0.2388, 0.1570, 0.2735, 0.1624], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0190, 0.0236, 0.0252, 0.0249, 0.0206, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:52:09,605 INFO [finetune.py:976] (6/7) Epoch 26, batch 3950, loss[loss=0.1573, simple_loss=0.2351, pruned_loss=0.03975, over 4770.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2394, pruned_loss=0.04742, over 956864.80 frames. ], batch size: 28, lr: 2.95e-03, grad_scale: 16.0
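The learning rate in these entries creeps down (2.96e-03 to 2.95e-03 around batch 3800) rather than stepping, which is consistent with icefall's Eden scheduler. A sketch of Eden's rate as I understand it; the parameter values passed below are placeholders chosen only to land near the logged rate, not this run's verified configuration:

def eden_lr(base_lr, step, epoch, lr_batches, lr_epochs):
    # lr decays smoothly in both the batch index and the epoch index
    batch_factor = ((step ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
    epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
    return base_lr * batch_factor * epoch_factor

# placeholder settings; prints roughly 2.96e-03 at this point in training
print(eden_lr(base_lr=0.004, step=146_000, epoch=26, lr_batches=100_000, lr_epochs=100))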
2023-03-27 07:52:27,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.492e+02 1.682e+02 1.976e+02 2.814e+02, threshold=3.365e+02, percent-clipped=0.0
2023-03-27 07:52:37,569 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8159, 1.6347, 1.4679, 1.3598, 1.5718, 1.5815, 1.6081, 2.1816], device='cuda:6'), covar=tensor([0.3554, 0.3337, 0.3018, 0.3182, 0.3499, 0.2376, 0.3021, 0.1702], device='cuda:6'), in_proj_covar=tensor([0.0292, 0.0267, 0.0239, 0.0279, 0.0262, 0.0231, 0.0260, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:52:42,789 INFO [finetune.py:976] (6/7) Epoch 26, batch 4000, loss[loss=0.1797, simple_loss=0.2628, pruned_loss=0.04836, over 4871.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2393, pruned_loss=0.04761, over 956616.58 frames. ], batch size: 44, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:52:48,864 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147200.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:53:15,800 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147241.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:53:16,327 INFO [finetune.py:976] (6/7) Epoch 26, batch 4050, loss[loss=0.1267, simple_loss=0.2031, pruned_loss=0.02512, over 4822.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2413, pruned_loss=0.04823, over 954545.04 frames. ], batch size: 25, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:53:19,986 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0092, 1.9966, 1.7927, 1.9007, 1.3128, 4.5912, 1.7926, 2.0429], device='cuda:6'), covar=tensor([0.3038, 0.2299, 0.1944, 0.2225, 0.1705, 0.0147, 0.2294, 0.1161], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 07:53:21,627 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147247.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:53:43,023 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.7079, 4.1292, 4.3009, 4.4967, 4.4412, 4.1888, 4.8071, 1.8801], device='cuda:6'), covar=tensor([0.0709, 0.1000, 0.0783, 0.0895, 0.1157, 0.1531, 0.0597, 0.5305], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0249, 0.0282, 0.0296, 0.0339, 0.0288, 0.0305, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:53:48,586 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.158e+02 1.660e+02 1.920e+02 2.375e+02 4.575e+02, threshold=3.840e+02, percent-clipped=6.0
2023-03-27 07:53:54,707 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6512, 1.2883, 0.8390, 1.5906, 2.0246, 1.4177, 1.4445, 1.6465], device='cuda:6'), covar=tensor([0.1596, 0.2043, 0.1941, 0.1277, 0.2157, 0.2118, 0.1571, 0.1988], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0095, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-27 07:53:58,887 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147282.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:54:08,319 INFO [finetune.py:976] (6/7) Epoch 26, batch 4100, loss[loss=0.1638, simple_loss=0.2524, pruned_loss=0.03759, over 4906.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2424, pruned_loss=0.0483, over 954065.10 frames. ], batch size: 36, lr: 2.95e-03, grad_scale: 16.0
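In each [finetune.py:976] line, "loss[...] over N frames" is the current batch while "tot_loss[...] over ~955k frames" is a frame-weighted running average over recent batches, which is why it moves slowly. A sketch of such a tracker; the decay factor is illustrative (the real recipe manages its statistics on its own schedule):

class RunningLoss:
    def __init__(self, decay: float = 0.999):
        self.decay = decay
        self.loss_sum = 0.0   # decayed sum of per-frame losses
        self.frames = 0.0     # matching decayed frame count

    def update(self, batch_loss_sum: float, batch_frames: float) -> float:
        self.loss_sum = self.decay * self.loss_sum + batch_loss_sum
        self.frames = self.decay * self.frames + batch_frames
        return self.loss_sum / self.frames   # the logged tot_loss value

tracker = RunningLoss()
print(tracker.update(batch_loss_sum=0.193 * 4891, batch_frames=4891))  # 0.193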
2023-03-27 07:54:13,960 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147295.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:54:18,783 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147302.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:54:24,627 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147310.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:54:37,227 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147330.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:54:44,912 INFO [finetune.py:976] (6/7) Epoch 26, batch 4150, loss[loss=0.207, simple_loss=0.2685, pruned_loss=0.07276, over 4860.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2443, pruned_loss=0.04888, over 953716.23 frames. ], batch size: 31, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:55:01,743 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7875, 1.5618, 2.0427, 3.3961, 2.3025, 2.3897, 1.1748, 2.8886], device='cuda:6'), covar=tensor([0.1677, 0.1332, 0.1354, 0.0506, 0.0744, 0.1426, 0.1711, 0.0415], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0131, 0.0162, 0.0100, 0.0133, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 07:55:03,460 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.520e+02 1.873e+02 2.208e+02 5.004e+02, threshold=3.746e+02, percent-clipped=1.0
2023-03-27 07:55:05,296 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147371.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:55:18,279 INFO [finetune.py:976] (6/7) Epoch 26, batch 4200, loss[loss=0.1751, simple_loss=0.2426, pruned_loss=0.05381, over 4913.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2443, pruned_loss=0.04855, over 953429.02 frames. ], batch size: 38, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:55:28,579 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147407.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:55:46,442 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147434.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:55:51,223 INFO [finetune.py:976] (6/7) Epoch 26, batch 4250, loss[loss=0.1518, simple_loss=0.2209, pruned_loss=0.04138, over 4769.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2423, pruned_loss=0.04836, over 953667.92 frames. ], batch size: 27, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:56:01,248 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8995, 1.7946, 1.6462, 2.0387, 2.3841, 2.0459, 1.8009, 1.6072], device='cuda:6'), covar=tensor([0.2006, 0.1918, 0.1793, 0.1570, 0.1557, 0.1158, 0.2255, 0.1805], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0215, 0.0198, 0.0245, 0.0190, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:56:16,336 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147468.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:56:16,791 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.473e+02 1.815e+02 2.255e+02 8.587e+02, threshold=3.630e+02, percent-clipped=2.0
2023-03-27 07:56:34,761 INFO [finetune.py:976] (6/7) Epoch 26, batch 4300, loss[loss=0.1926, simple_loss=0.2577, pruned_loss=0.06372, over 4864.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2406, pruned_loss=0.04856, over 955218.52 frames. ], batch size: 49, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:56:36,683 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2295, 1.9415, 2.4940, 4.0985, 2.8201, 2.7381, 0.8008, 3.4208], device='cuda:6'), covar=tensor([0.1596, 0.1320, 0.1384, 0.0553, 0.0735, 0.1734, 0.2150, 0.0391], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0101, 0.0134, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 07:56:36,729 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147495.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:56:37,813 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1892, 1.8779, 2.4709, 4.1922, 2.7986, 2.8697, 0.9331, 3.4578], device='cuda:6'), covar=tensor([0.1672, 0.1332, 0.1429, 0.0535, 0.0764, 0.1519, 0.1914, 0.0386], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0101, 0.0134, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 07:56:40,204 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147500.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:57:07,245 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. limit=2.0
2023-03-27 07:57:08,536 INFO [finetune.py:976] (6/7) Epoch 26, batch 4350, loss[loss=0.1358, simple_loss=0.2065, pruned_loss=0.03251, over 4766.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2364, pruned_loss=0.04703, over 952544.85 frames. ], batch size: 27, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:57:11,718 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7643, 1.8086, 2.1519, 1.3172, 1.8869, 2.0144, 1.6746, 2.1977], device='cuda:6'), covar=tensor([0.1379, 0.1967, 0.1290, 0.1833, 0.1051, 0.1466, 0.2695, 0.0918], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0194, 0.0191, 0.0177, 0.0215, 0.0218, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:57:12,268 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147548.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:57:24,761 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6932, 2.3765, 2.0311, 1.0678, 2.2634, 2.0217, 1.8263, 2.2415], device='cuda:6'), covar=tensor([0.0669, 0.0860, 0.1576, 0.2012, 0.1251, 0.2174, 0.2223, 0.0872], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0193, 0.0201, 0.0183, 0.0210, 0.0212, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:57:26,970 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.870e+01 1.462e+02 1.667e+02 1.917e+02 5.708e+02, threshold=3.333e+02, percent-clipped=2.0
2023-03-27 07:57:40,802 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0
2023-03-27 07:57:42,395 INFO [finetune.py:976] (6/7) Epoch 26, batch 4400, loss[loss=0.1821, simple_loss=0.2531, pruned_loss=0.05554, over 4824.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2384, pruned_loss=0.04794, over 953351.21 frames. ], batch size: 38, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:57:45,507 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147597.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:57:48,349 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0
2023-03-27 07:58:01,585 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3059, 2.9454, 3.0927, 3.1983, 3.0976, 2.9006, 3.3752, 0.9578], device='cuda:6'), covar=tensor([0.1233, 0.1078, 0.1160, 0.1362, 0.1738, 0.1861, 0.1102, 0.5940], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0250, 0.0284, 0.0298, 0.0340, 0.0289, 0.0308, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 07:58:04,857 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.49 vs. limit=5.0
2023-03-27 07:58:06,799 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4196, 1.5250, 1.5248, 0.8077, 1.7043, 1.8713, 1.9200, 1.4101], device='cuda:6'), covar=tensor([0.0916, 0.0726, 0.0558, 0.0559, 0.0438, 0.0564, 0.0306, 0.0867], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0128, 0.0122, 0.0130, 0.0129, 0.0141, 0.0149], device='cuda:6'), out_proj_covar=tensor([8.8561e-05, 1.0649e-04, 9.1059e-05, 8.5746e-05, 9.0795e-05, 9.1256e-05, 1.0036e-04, 1.0627e-04], device='cuda:6')
2023-03-27 07:58:16,297 INFO [finetune.py:976] (6/7) Epoch 26, batch 4450, loss[loss=0.1791, simple_loss=0.2541, pruned_loss=0.05203, over 4897.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2414, pruned_loss=0.04902, over 951150.37 frames. ], batch size: 36, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:58:23,538 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0
2023-03-27 07:58:27,298 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147659.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:58:34,813 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147666.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:58:36,564 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.542e+02 1.864e+02 2.192e+02 3.736e+02, threshold=3.727e+02, percent-clipped=4.0
2023-03-27 07:59:06,168 INFO [finetune.py:976] (6/7) Epoch 26, batch 4500, loss[loss=0.1574, simple_loss=0.2285, pruned_loss=0.04313, over 4816.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2439, pruned_loss=0.04987, over 951461.20 frames. ], batch size: 25, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 07:59:34,240 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0
2023-03-27 07:59:37,003 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147720.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 07:59:52,140 INFO [finetune.py:976] (6/7) Epoch 26, batch 4550, loss[loss=0.1729, simple_loss=0.2517, pruned_loss=0.04708, over 4914.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2452, pruned_loss=0.05047, over 951151.51 frames. ], batch size: 38, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:00:04,824 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147763.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:00:09,336 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.722e+01 1.488e+02 1.756e+02 2.293e+02 4.562e+02, threshold=3.512e+02, percent-clipped=2.0
2023-03-27 08:00:21,662 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8646, 1.6160, 2.0776, 1.9187, 1.7625, 1.7007, 1.9451, 1.9296], device='cuda:6'), covar=tensor([0.3614, 0.3362, 0.2660, 0.3444, 0.4017, 0.3566, 0.3810, 0.2601], device='cuda:6'), in_proj_covar=tensor([0.0266, 0.0248, 0.0268, 0.0296, 0.0295, 0.0271, 0.0302, 0.0252], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:00:24,543 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=147790.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:00:25,693 INFO [finetune.py:976] (6/7) Epoch 26, batch 4600, loss[loss=0.1472, simple_loss=0.2179, pruned_loss=0.03827, over 4786.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2441, pruned_loss=0.05045, over 949831.63 frames. ], batch size: 29, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:00:39,396 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.73 vs. limit=2.0
2023-03-27 08:00:39,939 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147815.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:00:58,624 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3384, 1.4062, 1.8209, 1.7119, 1.5144, 3.2298, 1.2715, 1.5302], device='cuda:6'), covar=tensor([0.1011, 0.1712, 0.1235, 0.0890, 0.1554, 0.0253, 0.1498, 0.1699], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6')
2023-03-27 08:00:59,128 INFO [finetune.py:976] (6/7) Epoch 26, batch 4650, loss[loss=0.1587, simple_loss=0.2344, pruned_loss=0.04146, over 4907.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2416, pruned_loss=0.05004, over 949561.10 frames. ], batch size: 46, lr: 2.95e-03, grad_scale: 32.0
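grad_scale in these entries moves between 16.0 and 32.0 (it is 16.0 above and 32.0 from batch 4550 on, dropping again later). That pattern is the signature of dynamic loss scaling for fp16 training: the scale is halved when gradients overflow and doubled back after a run of clean steps. A generic sketch of that policy (the recipe may use torch.cuda.amp.GradScaler or its own equivalent; the constants below are illustrative):

class DynamicGradScale:
    def __init__(self, scale=32.0, growth_factor=2.0,
                 backoff_factor=0.5, growth_interval=2000):
        self.scale = scale
        self.growth_factor = growth_factor
        self.backoff_factor = backoff_factor
        self.growth_interval = growth_interval
        self._good_steps = 0

    def step(self, found_inf: bool) -> float:
        if found_inf:                           # overflow: shrink, restart counter
            self.scale *= self.backoff_factor   # e.g. 32.0 -> 16.0
            self._good_steps = 0
        else:
            self._good_steps += 1
            if self._good_steps >= self.growth_interval:
                self.scale *= self.growth_factor   # e.g. 16.0 -> 32.0
                self._good_steps = 0
        return self.scale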
2023-03-27 08:01:01,580 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1557, 2.0196, 1.6813, 1.9165, 1.9538, 1.9855, 1.9592, 2.6415], device='cuda:6'), covar=tensor([0.3656, 0.3934, 0.3472, 0.4097, 0.4064, 0.2647, 0.3714, 0.1750], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0265, 0.0238, 0.0277, 0.0261, 0.0230, 0.0259, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:01:05,132 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6922, 1.6162, 2.0563, 3.4659, 2.2954, 2.4003, 0.8736, 2.9252], device='cuda:6'), covar=tensor([0.1658, 0.1426, 0.1348, 0.0536, 0.0791, 0.1643, 0.1936, 0.0450], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0164, 0.0102, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 08:01:08,842 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147857.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:01:15,988 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.955e+01 1.458e+02 1.712e+02 2.175e+02 4.467e+02, threshold=3.424e+02, percent-clipped=3.0
2023-03-27 08:01:17,782 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2606, 2.0013, 2.2638, 1.6133, 2.2180, 2.3765, 2.2245, 1.6303], device='cuda:6'), covar=tensor([0.0518, 0.0719, 0.0631, 0.0767, 0.0644, 0.0542, 0.0596, 0.1237], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0136, 0.0141, 0.0120, 0.0127, 0.0138, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:01:21,840 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147876.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:01:39,631 INFO [finetune.py:976] (6/7) Epoch 26, batch 4700, loss[loss=0.1484, simple_loss=0.2203, pruned_loss=0.0382, over 4695.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2393, pruned_loss=0.04915, over 951316.61 frames. ], batch size: 23, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:01:46,497 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147897.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:01:55,417 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=147911.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:01:59,720 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147918.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:02:16,231 INFO [finetune.py:976] (6/7) Epoch 26, batch 4750, loss[loss=0.235, simple_loss=0.2848, pruned_loss=0.09262, over 4817.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2366, pruned_loss=0.04794, over 951495.95 frames. ], batch size: 38, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:02:19,085 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=147945.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:02:32,361 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=147966.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:02:34,090 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.372e+01 1.470e+02 1.654e+02 2.049e+02 2.990e+02, threshold=3.309e+02, percent-clipped=0.0
2023-03-27 08:02:35,983 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=147972.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:02:39,571 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0
2023-03-27 08:02:50,080 INFO [finetune.py:976] (6/7) Epoch 26, batch 4800, loss[loss=0.1837, simple_loss=0.2678, pruned_loss=0.04983, over 4815.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2398, pruned_loss=0.0486, over 951182.25 frames. ], batch size: 41, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:03:00,508 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0
2023-03-27 08:03:06,226 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148014.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:03:06,846 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148015.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:03:24,535 INFO [finetune.py:976] (6/7) Epoch 26, batch 4850, loss[loss=0.1843, simple_loss=0.2468, pruned_loss=0.06094, over 4888.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2433, pruned_loss=0.04947, over 951969.85 frames. ], batch size: 32, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:03:39,249 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148063.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:03:42,781 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.530e+02 1.908e+02 2.333e+02 3.886e+02, threshold=3.817e+02, percent-clipped=4.0
2023-03-27 08:03:47,747 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2628, 3.7125, 3.9387, 4.0722, 4.0251, 3.8189, 4.3551, 1.4872], device='cuda:6'), covar=tensor([0.0877, 0.0870, 0.0809, 0.0962, 0.1273, 0.1619, 0.0758, 0.5982], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0249, 0.0284, 0.0297, 0.0340, 0.0288, 0.0307, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:04:04,151 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148090.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:04:05,252 INFO [finetune.py:976] (6/7) Epoch 26, batch 4900, loss[loss=0.1781, simple_loss=0.2536, pruned_loss=0.05135, over 4814.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2441, pruned_loss=0.04961, over 952634.73 frames. ], batch size: 40, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:04:30,820 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148111.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:04:42,534 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3199, 2.1161, 2.3065, 1.5697, 2.2551, 2.3494, 2.2406, 1.9039], device='cuda:6'), covar=tensor([0.0559, 0.0710, 0.0706, 0.0892, 0.0704, 0.0707, 0.0742, 0.1103], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0137, 0.0142, 0.0120, 0.0128, 0.0139, 0.0141, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:04:48,501 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3854, 1.3246, 1.2466, 1.3597, 1.6059, 1.5812, 1.3660, 1.2372], device='cuda:6'), covar=tensor([0.0340, 0.0296, 0.0611, 0.0300, 0.0235, 0.0385, 0.0342, 0.0418], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0106, 0.0147, 0.0111, 0.0101, 0.0115, 0.0104, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.8407e-05, 8.1219e-05, 1.1486e-04, 8.5068e-05, 7.8206e-05, 8.5037e-05, 7.6914e-05, 8.5427e-05], device='cuda:6')
2023-03-27 08:04:57,742 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148138.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:05:00,560 INFO [finetune.py:976] (6/7) Epoch 26, batch 4950, loss[loss=0.2103, simple_loss=0.2828, pruned_loss=0.06884, over 4839.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2449, pruned_loss=0.0497, over 952906.19 frames. ], batch size: 49, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:05:18,896 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.030e+01 1.572e+02 1.871e+02 2.257e+02 5.603e+02, threshold=3.742e+02, percent-clipped=1.0
2023-03-27 08:05:20,237 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148171.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:05:33,993 INFO [finetune.py:976] (6/7) Epoch 26, batch 5000, loss[loss=0.1597, simple_loss=0.2279, pruned_loss=0.04577, over 4887.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2432, pruned_loss=0.04905, over 954803.41 frames. ], batch size: 35, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:05:48,836 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148213.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:06:07,383 INFO [finetune.py:976] (6/7) Epoch 26, batch 5050, loss[loss=0.1634, simple_loss=0.2407, pruned_loss=0.04301, over 4914.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2403, pruned_loss=0.04814, over 955424.18 frames. ], batch size: 37, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:06:23,261 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8406, 3.2891, 3.5406, 3.6767, 3.6227, 3.3605, 3.8889, 1.2004], device='cuda:6'), covar=tensor([0.0870, 0.1049, 0.0997, 0.1048, 0.1351, 0.1747, 0.0967, 0.5983], device='cuda:6'), in_proj_covar=tensor([0.0356, 0.0251, 0.0285, 0.0299, 0.0340, 0.0289, 0.0308, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:06:25,053 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148267.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:06:26,159 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.490e+02 1.796e+02 2.082e+02 4.496e+02, threshold=3.592e+02, percent-clipped=3.0
2023-03-27 08:06:37,573 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148287.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:06:40,481 INFO [finetune.py:976] (6/7) Epoch 26, batch 5100, loss[loss=0.1693, simple_loss=0.233, pruned_loss=0.05283, over 4813.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2368, pruned_loss=0.04688, over 954377.46 frames. ], batch size: 38, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:06:46,516 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2739, 2.0642, 1.8916, 2.2811, 2.8168, 2.3147, 2.3997, 1.7871], device='cuda:6'), covar=tensor([0.2027, 0.1862, 0.1777, 0.1572, 0.1589, 0.1052, 0.1825, 0.1783], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0211, 0.0215, 0.0199, 0.0247, 0.0192, 0.0218, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:07:02,996 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148315.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:07:22,780 INFO [finetune.py:976] (6/7) Epoch 26, batch 5150, loss[loss=0.1926, simple_loss=0.2617, pruned_loss=0.06177, over 4874.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2376, pruned_loss=0.04714, over 954587.10 frames. ], batch size: 34, lr: 2.95e-03, grad_scale: 32.0
2023-03-27 08:07:23,527 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9889, 1.8176, 2.5424, 2.0190, 2.0866, 4.6191, 1.9470, 2.0412], device='cuda:6'), covar=tensor([0.0921, 0.1775, 0.1014, 0.0975, 0.1464, 0.0176, 0.1369, 0.1812], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 08:07:27,030 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148348.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:07:32,386 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1368, 1.4482, 0.8086, 2.0029, 2.4824, 1.8138, 1.7819, 2.0091], device='cuda:6'), covar=tensor([0.1340, 0.1811, 0.1925, 0.1038, 0.1702, 0.1816, 0.1277, 0.1899], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0109, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6')
2023-03-27 08:07:35,859 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5747, 2.2768, 1.7871, 0.8881, 1.9132, 2.0152, 1.8153, 1.9917], device='cuda:6'), covar=tensor([0.0812, 0.0791, 0.1429, 0.1938, 0.1401, 0.2206, 0.2117, 0.0883], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0200, 0.0182, 0.0209, 0.0211, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:07:36,423 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6196, 1.5393, 1.4872, 1.5837, 1.2935, 3.5504, 1.3773, 1.7379], device='cuda:6'), covar=tensor([0.3329, 0.2518, 0.2189, 0.2349, 0.1720, 0.0201, 0.2661, 0.1307], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6')
2023-03-27 08:07:36,970 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148363.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:07:41,443 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.549e+02 1.828e+02 2.233e+02 5.689e+02, threshold=3.657e+02, percent-clipped=4.0
2023-03-27 08:07:55,901 INFO [finetune.py:976] (6/7) Epoch 26, batch 5200, loss[loss=0.1863, simple_loss=0.254, pruned_loss=0.05929, over 4914.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2418, pruned_loss=0.04887, over 954088.57 frames. ], batch size: 37, lr: 2.95e-03, grad_scale: 16.0
2023-03-27 08:08:21,948 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148429.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:08:29,697 INFO [finetune.py:976] (6/7) Epoch 26, batch 5250, loss[loss=0.1693, simple_loss=0.2356, pruned_loss=0.05152, over 4918.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2439, pruned_loss=0.04889, over 952647.60 frames. ], batch size: 36, lr: 2.94e-03, grad_scale: 16.0
2023-03-27 08:08:48,646 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.102e+02 1.514e+02 1.742e+02 2.193e+02 4.299e+02, threshold=3.484e+02, percent-clipped=1.0
2023-03-27 08:08:49,820 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148471.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:08:53,915 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0620, 1.9394, 1.6390, 0.9267, 1.6974, 1.7448, 1.6318, 1.8485], device='cuda:6'), covar=tensor([0.0781, 0.0569, 0.1069, 0.1367, 0.1014, 0.1632, 0.1676, 0.0658], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0199, 0.0182, 0.0209, 0.0211, 0.0223, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:09:02,318 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148490.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:09:03,427 INFO [finetune.py:976] (6/7) Epoch 26, batch 5300, loss[loss=0.1489, simple_loss=0.2158, pruned_loss=0.04098, over 4381.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2451, pruned_loss=0.0492, over 949790.84 frames. ], batch size: 19, lr: 2.94e-03, grad_scale: 16.0
2023-03-27 08:09:07,117 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148498.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:09:08,532 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.19 vs. limit=2.0
2023-03-27 08:09:20,034 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148513.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:09:28,389 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148519.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:09:36,408 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5641, 1.5051, 1.9475, 2.9682, 2.0446, 2.2724, 0.8624, 2.6106], device='cuda:6'), covar=tensor([0.1804, 0.1401, 0.1313, 0.0669, 0.0864, 0.1336, 0.1974, 0.0521], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0163, 0.0101, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6')
2023-03-27 08:09:54,004 INFO [finetune.py:976] (6/7) Epoch 26, batch 5350, loss[loss=0.188, simple_loss=0.2466, pruned_loss=0.06466, over 4926.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2459, pruned_loss=0.04939, over 951411.50 frames. ], batch size: 33, lr: 2.94e-03, grad_scale: 16.0
2023-03-27 08:10:12,444 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148559.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:10:13,591 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148561.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:10:17,754 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148567.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:10:19,465 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.029e+02 1.412e+02 1.653e+02 1.941e+02 3.220e+02, threshold=3.306e+02, percent-clipped=0.0
2023-03-27 08:10:34,698 INFO [finetune.py:976] (6/7) Epoch 26, batch 5400, loss[loss=0.1791, simple_loss=0.2528, pruned_loss=0.05271, over 4779.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2446, pruned_loss=0.04992, over 952682.56 frames. ], batch size: 29, lr: 2.94e-03, grad_scale: 16.0
2023-03-27 08:10:41,764 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.94 vs. limit=5.0
2023-03-27 08:10:49,744 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148615.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:10:53,907 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6729, 1.5098, 1.8760, 1.2280, 1.7182, 1.8265, 1.4193, 1.9666], device='cuda:6'), covar=tensor([0.1060, 0.2005, 0.1240, 0.1615, 0.0753, 0.1249, 0.2759, 0.0717], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0208, 0.0194, 0.0191, 0.0175, 0.0213, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6')
2023-03-27 08:10:57,390 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5091, 1.5250, 1.6517, 0.9305, 1.7628, 1.9839, 1.9785, 1.4812], device='cuda:6'), covar=tensor([0.0884, 0.0747, 0.0540, 0.0550, 0.0474, 0.0619, 0.0347, 0.0748], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0149, 0.0129, 0.0123, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8917e-05, 1.0695e-04, 9.1667e-05, 8.6409e-05, 9.1924e-05, 9.1926e-05, 1.0133e-04, 1.0731e-04], device='cuda:6')
2023-03-27 08:11:07,907 INFO [finetune.py:976] (6/7) Epoch 26, batch 5450, loss[loss=0.1653, simple_loss=0.243, pruned_loss=0.0438, over 4904.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2416, pruned_loss=0.04933, over 952711.56 frames. ], batch size: 35, lr: 2.94e-03, grad_scale: 16.0
2023-03-27 08:11:08,562 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148643.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:11:13,461 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148651.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:11:25,814 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.571e+02 1.855e+02 2.174e+02 4.125e+02, threshold=3.711e+02, percent-clipped=2.0
2023-03-27 08:11:41,099 INFO [finetune.py:976] (6/7) Epoch 26, batch 5500, loss[loss=0.1396, simple_loss=0.2112, pruned_loss=0.03401, over 4830.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2393, pruned_loss=0.049, over 951475.72 frames. ], batch size: 30, lr: 2.94e-03, grad_scale: 16.0
2023-03-27 08:11:51,118 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0
2023-03-27 08:11:53,954 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148712.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:11:56,384 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=148716.0, num_to_drop=0, layers_to_drop=set()
2023-03-27 08:12:24,627 INFO [finetune.py:976] (6/7) Epoch 26, batch 5550, loss[loss=0.1962, simple_loss=0.2759, pruned_loss=0.05828, over 4916.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2407, pruned_loss=0.04958, over 951737.02 frames. ], batch size: 38, lr: 2.94e-03, grad_scale: 16.0
], batch size: 38, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:12:25,372 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9880, 1.9903, 2.1558, 1.4212, 2.0594, 2.2190, 2.2036, 1.7371], device='cuda:6'), covar=tensor([0.0582, 0.0731, 0.0664, 0.0856, 0.0755, 0.0664, 0.0596, 0.1143], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0137, 0.0142, 0.0120, 0.0129, 0.0139, 0.0140, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:12:28,981 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8501, 1.7597, 1.5743, 2.0055, 2.4286, 1.9859, 1.8764, 1.5195], device='cuda:6'), covar=tensor([0.2030, 0.1771, 0.1810, 0.1474, 0.1504, 0.1080, 0.1901, 0.1771], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0209, 0.0213, 0.0197, 0.0244, 0.0190, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:12:31,747 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-27 08:12:33,856 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5935, 1.4601, 1.9921, 3.2613, 2.1414, 2.3856, 0.9255, 2.8079], device='cuda:6'), covar=tensor([0.1874, 0.1534, 0.1357, 0.0563, 0.0875, 0.1282, 0.1980, 0.0475], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0131, 0.0162, 0.0100, 0.0134, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 08:12:36,363 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1475, 1.7834, 2.4054, 1.6255, 2.1546, 2.4640, 1.7440, 2.4532], device='cuda:6'), covar=tensor([0.1193, 0.2186, 0.1576, 0.1999, 0.0946, 0.1420, 0.2845, 0.0858], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0207, 0.0194, 0.0190, 0.0175, 0.0214, 0.0217, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:12:42,334 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.664e+01 1.578e+02 1.917e+02 2.288e+02 4.413e+02, threshold=3.834e+02, percent-clipped=2.0 2023-03-27 08:12:47,526 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=148777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:12:52,460 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148785.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:12:56,507 INFO [finetune.py:976] (6/7) Epoch 26, batch 5600, loss[loss=0.1686, simple_loss=0.251, pruned_loss=0.04308, over 4893.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2444, pruned_loss=0.05038, over 950279.33 frames. ], batch size: 35, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:13:24,083 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8344, 1.2059, 1.9268, 1.8627, 1.6935, 1.6413, 1.7809, 1.8485], device='cuda:6'), covar=tensor([0.4434, 0.4462, 0.3519, 0.3748, 0.5153, 0.4124, 0.4728, 0.3259], device='cuda:6'), in_proj_covar=tensor([0.0265, 0.0246, 0.0266, 0.0294, 0.0294, 0.0270, 0.0299, 0.0251], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:13:25,298 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.50 vs. 
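The scaling.py:679 lines compare a "whitening" metric against a limit (2.0 for the grouped 96- and 192-channel activations, 5.0 for the full 384-channel ones). The metric behaves like 1.0 for perfectly white features, growing with the eigenvalue spread of the feature covariance. One plausible definition with that property, offered as a reconstruction rather than the library's exact formula:

```python
import torch


def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """x: (num_frames, num_channels), num_channels divisible by num_groups.
    Returns 1.0 when each group's covariance is proportional to the
    identity; larger means less white. A plausible sketch only."""
    num_frames, num_channels = x.shape
    xg = x.reshape(num_frames, num_groups, num_channels // num_groups)
    xg = xg - xg.mean(dim=0, keepdim=True)
    metrics = []
    for g in range(num_groups):
        v = xg[:, g, :]
        cov = (v.t() @ v) / num_frames  # per-group covariance estimate
        d = cov.shape[0]
        # d * sum(lambda_i^2) / (sum lambda_i)^2 == 1 iff all eigenvalues equal
        metrics.append(d * (cov ** 2).sum() / cov.trace() ** 2)
    return float(torch.stack(metrics).mean())
```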
limit=2.0 2023-03-27 08:13:25,690 INFO [finetune.py:976] (6/7) Epoch 26, batch 5650, loss[loss=0.2057, simple_loss=0.2837, pruned_loss=0.06382, over 4807.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2467, pruned_loss=0.04973, over 953470.56 frames. ], batch size: 40, lr: 2.94e-03, grad_scale: 16.0 2023-03-27 08:13:32,821 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=148854.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:13:41,301 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1944, 1.1965, 1.3297, 0.6992, 1.2820, 1.4296, 1.4751, 1.2337], device='cuda:6'), covar=tensor([0.0819, 0.0703, 0.0583, 0.0474, 0.0525, 0.0672, 0.0386, 0.0694], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0149, 0.0129, 0.0123, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.9052e-05, 1.0679e-04, 9.1603e-05, 8.6459e-05, 9.1664e-05, 9.1967e-05, 1.0137e-04, 1.0740e-04], device='cuda:6') 2023-03-27 08:13:42,332 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.500e+02 1.770e+02 2.150e+02 4.859e+02, threshold=3.539e+02, percent-clipped=1.0 2023-03-27 08:13:55,305 INFO [finetune.py:976] (6/7) Epoch 26, batch 5700, loss[loss=0.1653, simple_loss=0.2268, pruned_loss=0.05195, over 4068.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2426, pruned_loss=0.04961, over 929107.05 frames. ], batch size: 17, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:14:05,503 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3790, 1.4576, 1.7506, 1.5017, 1.5506, 2.8237, 1.3881, 1.5331], device='cuda:6'), covar=tensor([0.0979, 0.1588, 0.1057, 0.0847, 0.1458, 0.0342, 0.1412, 0.1679], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:14:24,193 INFO [finetune.py:976] (6/7) Epoch 27, batch 0, loss[loss=0.1979, simple_loss=0.2642, pruned_loss=0.06581, over 4920.00 frames. ], tot_loss[loss=0.1979, simple_loss=0.2642, pruned_loss=0.06581, over 4920.00 frames. ], batch size: 33, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:14:24,193 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 08:14:29,670 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6800, 1.2220, 0.8969, 1.6724, 2.1829, 1.1297, 1.5784, 1.5460], device='cuda:6'), covar=tensor([0.1541, 0.2094, 0.1875, 0.1166, 0.1840, 0.2089, 0.1373, 0.2128], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0110, 0.0092, 0.0120, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 08:14:30,135 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3014, 1.3197, 1.1861, 1.3289, 1.5397, 1.5268, 1.3380, 1.2264], device='cuda:6'), covar=tensor([0.0449, 0.0299, 0.0595, 0.0292, 0.0272, 0.0381, 0.0300, 0.0379], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0105, 0.0146, 0.0110, 0.0100, 0.0114, 0.0103, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7685e-05, 8.0337e-05, 1.1349e-04, 8.4398e-05, 7.7821e-05, 8.4339e-05, 7.6199e-05, 8.4954e-05], device='cuda:6') 2023-03-27 08:14:40,695 INFO [finetune.py:1010] (6/7) Epoch 27, validation: loss=0.1593, simple_loss=0.2269, pruned_loss=0.04586, over 2265189.00 frames. 
2023-03-27 08:14:40,695 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 08:14:57,035 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=148943.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:15:20,350 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 08:15:27,449 INFO [finetune.py:976] (6/7) Epoch 27, batch 50, loss[loss=0.1889, simple_loss=0.2706, pruned_loss=0.05361, over 4892.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2414, pruned_loss=0.04804, over 215592.46 frames. ], batch size: 35, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:15:28,074 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.253e+01 1.427e+02 1.731e+02 2.058e+02 3.661e+02, threshold=3.462e+02, percent-clipped=4.0 2023-03-27 08:15:35,633 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.7370, 4.1527, 4.3148, 4.5224, 4.4587, 4.2651, 4.8169, 1.6303], device='cuda:6'), covar=tensor([0.0740, 0.0852, 0.0761, 0.0793, 0.1281, 0.1351, 0.0613, 0.5604], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0250, 0.0285, 0.0298, 0.0338, 0.0289, 0.0307, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:15:44,092 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=148991.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:15:54,011 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149007.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:16:03,651 INFO [finetune.py:976] (6/7) Epoch 27, batch 100, loss[loss=0.1817, simple_loss=0.2482, pruned_loss=0.05758, over 4783.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2408, pruned_loss=0.04898, over 380382.88 frames. ], batch size: 51, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:16:13,172 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4837, 1.3835, 1.5157, 0.8598, 1.5199, 1.4850, 1.5212, 1.3107], device='cuda:6'), covar=tensor([0.0567, 0.0799, 0.0658, 0.0925, 0.0908, 0.0702, 0.0590, 0.1303], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0136, 0.0141, 0.0119, 0.0128, 0.0139, 0.0140, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:16:23,434 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5520, 3.3975, 3.1872, 1.4008, 3.5204, 2.5550, 0.9032, 2.3969], device='cuda:6'), covar=tensor([0.2567, 0.1997, 0.1968, 0.3696, 0.1157, 0.1098, 0.4486, 0.1625], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0179, 0.0161, 0.0129, 0.0160, 0.0124, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 08:16:25,727 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.46 vs. limit=5.0 2023-03-27 08:16:36,430 INFO [finetune.py:976] (6/7) Epoch 27, batch 150, loss[loss=0.1233, simple_loss=0.2002, pruned_loss=0.0232, over 4805.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2371, pruned_loss=0.04844, over 510091.69 frames. 
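The validation pass at the top of each epoch (finetune.py:1001/1010 above) reports a single frame-weighted loss over the whole dev set, followed by the peak CUDA memory. A schematic of both; torch.cuda.max_memory_allocated is the real API, while model.compute_loss stands in for the actual model call and is hypothetical.

```python
import torch


def validate(model, dev_loader, device) -> float:
    """Frame-weighted validation loss, sum(loss * frames) / sum(frames),
    which is how one figure 'over 2265189.00 frames' can be formed."""
    model.eval()
    loss_sum, frames = 0.0, 0.0
    with torch.no_grad():
        for batch in dev_loader:
            loss, num_frames = model.compute_loss(batch)  # hypothetical API
            loss_sum += float(loss) * num_frames
            frames += num_frames
    mb = torch.cuda.max_memory_allocated(device) // (1024 * 1024)
    print(f"validation: loss={loss_sum / frames:.4f}, over {frames:.2f} frames.")
    print(f"Maximum memory allocated so far is {mb}MB")
    return loss_sum / frames
```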
], batch size: 25, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:16:37,490 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.570e+01 1.451e+02 1.770e+02 2.054e+02 3.397e+02, threshold=3.539e+02, percent-clipped=0.0 2023-03-27 08:16:39,145 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149072.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:16:43,504 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.98 vs. limit=5.0 2023-03-27 08:16:47,476 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149085.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:17:09,498 INFO [finetune.py:976] (6/7) Epoch 27, batch 200, loss[loss=0.1194, simple_loss=0.2, pruned_loss=0.0194, over 4787.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2362, pruned_loss=0.04844, over 609600.00 frames. ], batch size: 29, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:17:19,404 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149133.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:17:20,058 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0220, 1.3485, 0.7887, 1.9558, 2.3439, 1.7667, 1.7081, 1.9088], device='cuda:6'), covar=tensor([0.1259, 0.1898, 0.1859, 0.1025, 0.1821, 0.1935, 0.1275, 0.1786], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0110, 0.0092, 0.0119, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 08:17:39,169 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149154.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:17:52,873 INFO [finetune.py:976] (6/7) Epoch 27, batch 250, loss[loss=0.2166, simple_loss=0.2882, pruned_loss=0.07252, over 4810.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2387, pruned_loss=0.04894, over 684489.55 frames. ], batch size: 40, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:17:53,480 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.507e+01 1.547e+02 1.763e+02 2.073e+02 3.560e+02, threshold=3.526e+02, percent-clipped=1.0 2023-03-27 08:17:58,737 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2894, 2.1456, 1.6622, 0.7591, 1.8541, 1.7965, 1.7052, 1.9769], device='cuda:6'), covar=tensor([0.0970, 0.0790, 0.1600, 0.2163, 0.1349, 0.2427, 0.2171, 0.0936], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0190, 0.0198, 0.0180, 0.0207, 0.0209, 0.0221, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:18:14,783 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149202.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:18:22,139 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8469, 1.7223, 1.4684, 1.3442, 1.5944, 1.6164, 1.6149, 2.1940], device='cuda:6'), covar=tensor([0.3537, 0.3386, 0.2948, 0.3309, 0.3560, 0.2139, 0.3186, 0.1663], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0235, 0.0275, 0.0259, 0.0229, 0.0258, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:18:25,585 INFO [finetune.py:976] (6/7) Epoch 27, batch 300, loss[loss=0.1525, simple_loss=0.2478, pruned_loss=0.0286, over 4812.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2417, pruned_loss=0.04917, over 744854.30 frames. 
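The fractional frame counts in the tot_loss entries (e.g. "over 949790.84 frames") indicate the running statistic is an exponentially decayed sum rather than a plain total: both the weighted loss sum and the frame count decay each batch, so the frame count drifts to a roughly constant, non-integer value. A minimal sketch; the decay constant is an assumption.

```python
class DecayedLossTracker:
    """tot_loss bookkeeping sketch: decayed sums of (loss * frames) and
    frames, whose ratio is the tot_loss figure printed every 50 batches."""

    def __init__(self, decay: float = 1.0 - 1.0 / 200):  # assumed constant
        self.decay = decay
        self.loss_sum = 0.0
        self.frames = 0.0

    def update(self, loss: float, num_frames: float) -> float:
        self.loss_sum = self.loss_sum * self.decay + loss * num_frames
        self.frames = self.frames * self.decay + num_frames
        return self.loss_sum / self.frames  # the reported tot_loss
```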
], batch size: 38, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:18:58,793 INFO [finetune.py:976] (6/7) Epoch 27, batch 350, loss[loss=0.1744, simple_loss=0.2436, pruned_loss=0.05259, over 4746.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2428, pruned_loss=0.04927, over 790904.40 frames. ], batch size: 54, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:18:59,215 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.30 vs. limit=5.0 2023-03-27 08:18:59,397 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.129e+02 1.601e+02 1.876e+02 2.140e+02 5.128e+02, threshold=3.753e+02, percent-clipped=1.0 2023-03-27 08:19:24,875 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149307.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:19:32,721 INFO [finetune.py:976] (6/7) Epoch 27, batch 400, loss[loss=0.1466, simple_loss=0.2239, pruned_loss=0.03461, over 4889.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2439, pruned_loss=0.0487, over 828626.72 frames. ], batch size: 43, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:19:40,701 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7280, 1.5967, 2.1035, 3.3044, 2.1809, 2.4066, 1.1864, 2.8004], device='cuda:6'), covar=tensor([0.1559, 0.1283, 0.1202, 0.0519, 0.0805, 0.1243, 0.1639, 0.0442], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0131, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 08:20:07,340 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149355.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:20:16,472 INFO [finetune.py:976] (6/7) Epoch 27, batch 450, loss[loss=0.1773, simple_loss=0.2545, pruned_loss=0.05011, over 4905.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2424, pruned_loss=0.04802, over 858577.92 frames. ], batch size: 37, lr: 2.94e-03, grad_scale: 8.0 2023-03-27 08:20:17,065 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.956e+01 1.496e+02 1.736e+02 2.126e+02 4.914e+02, threshold=3.471e+02, percent-clipped=1.0 2023-03-27 08:20:17,777 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=149372.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:20:58,684 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5995, 1.5571, 1.5116, 1.5592, 1.2703, 3.2607, 1.3019, 1.6508], device='cuda:6'), covar=tensor([0.3278, 0.2547, 0.2110, 0.2432, 0.1635, 0.0227, 0.2573, 0.1272], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:21:04,696 INFO [finetune.py:976] (6/7) Epoch 27, batch 500, loss[loss=0.1387, simple_loss=0.2117, pruned_loss=0.03288, over 4755.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.24, pruned_loss=0.0478, over 882310.72 frames. ], batch size: 27, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:21:04,764 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=149420.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:21:14,855 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.47 vs. limit=5.0 2023-03-27 08:21:38,465 INFO [finetune.py:976] (6/7) Epoch 27, batch 550, loss[loss=0.1369, simple_loss=0.2131, pruned_loss=0.03039, over 4781.00 frames. 
], tot_loss[loss=0.1656, simple_loss=0.2372, pruned_loss=0.04698, over 897587.46 frames. ], batch size: 26, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:21:39,061 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.918e+01 1.466e+02 1.717e+02 2.125e+02 3.295e+02, threshold=3.435e+02, percent-clipped=0.0 2023-03-27 08:22:00,711 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4481, 1.4261, 1.2091, 1.3326, 1.7694, 1.5965, 1.4262, 1.2603], device='cuda:6'), covar=tensor([0.0378, 0.0359, 0.0611, 0.0362, 0.0221, 0.0645, 0.0352, 0.0431], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0106, 0.0147, 0.0111, 0.0101, 0.0115, 0.0104, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.8186e-05, 8.1142e-05, 1.1460e-04, 8.5118e-05, 7.8487e-05, 8.5123e-05, 7.7040e-05, 8.5467e-05], device='cuda:6') 2023-03-27 08:22:02,516 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2610, 1.4022, 0.8536, 2.0374, 2.6080, 1.8221, 1.8345, 2.0497], device='cuda:6'), covar=tensor([0.1343, 0.2006, 0.1962, 0.1104, 0.1695, 0.1834, 0.1342, 0.1962], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0111, 0.0092, 0.0121, 0.0094, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 08:22:08,657 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=149514.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:22:12,135 INFO [finetune.py:976] (6/7) Epoch 27, batch 600, loss[loss=0.1841, simple_loss=0.2563, pruned_loss=0.05596, over 4898.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2398, pruned_loss=0.04856, over 910858.67 frames. ], batch size: 37, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:22:20,572 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5459, 1.4959, 2.0010, 3.1729, 2.1500, 2.3091, 1.3195, 2.7294], device='cuda:6'), covar=tensor([0.1709, 0.1342, 0.1287, 0.0541, 0.0808, 0.1359, 0.1516, 0.0441], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 08:22:45,173 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=149565.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:22:48,083 INFO [finetune.py:976] (6/7) Epoch 27, batch 650, loss[loss=0.1837, simple_loss=0.2503, pruned_loss=0.05857, over 4830.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2428, pruned_loss=0.04975, over 920823.89 frames. ], batch size: 30, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:22:53,182 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.592e+02 1.975e+02 2.434e+02 4.045e+02, threshold=3.949e+02, percent-clipped=4.0 2023-03-27 08:22:55,819 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=149575.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:23:29,820 INFO [finetune.py:976] (6/7) Epoch 27, batch 700, loss[loss=0.1749, simple_loss=0.2364, pruned_loss=0.05676, over 4915.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2442, pruned_loss=0.04969, over 928551.30 frames. 
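The learning rate ticks down from 2.94e-03 to 2.93e-03 around here, consistent with icefall's Eden-style schedule, which decays polynomially in both the batch index and the epoch. A sketch of that formula follows; assuming this run's base_lr of 0.004 with lr_batches 100000 and lr_epochs 100, it reproduces the logged values (about 2.94e-03 at batch ~148500 / epoch 26, about 2.93e-03 at batch ~150000 / epoch 27).

```python
def eden_lr(base_lr: float, batch: float, epoch: float,
            lr_batches: float = 100000.0, lr_epochs: float = 100.0) -> float:
    """Eden-style schedule (sketch): polynomial decay in batch and epoch.
    eden_lr(0.004, 150000, 27) ~= 2.93e-03, matching the log near here."""
    return (base_lr
            * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
            * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)
```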
], batch size: 38, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:23:33,661 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=149626.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:23:50,616 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 08:24:03,079 INFO [finetune.py:976] (6/7) Epoch 27, batch 750, loss[loss=0.1742, simple_loss=0.2551, pruned_loss=0.04665, over 4862.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2444, pruned_loss=0.04987, over 933095.05 frames. ], batch size: 34, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:24:03,698 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.520e+02 1.783e+02 2.094e+02 3.998e+02, threshold=3.567e+02, percent-clipped=1.0 2023-03-27 08:24:32,175 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4916, 2.1052, 2.8121, 1.7561, 2.3744, 2.7967, 1.9974, 2.8712], device='cuda:6'), covar=tensor([0.1419, 0.2172, 0.1494, 0.2277, 0.1024, 0.1444, 0.2817, 0.0838], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0206, 0.0193, 0.0190, 0.0175, 0.0212, 0.0217, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:24:36,880 INFO [finetune.py:976] (6/7) Epoch 27, batch 800, loss[loss=0.1932, simple_loss=0.2646, pruned_loss=0.06095, over 4894.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2448, pruned_loss=0.04946, over 938491.51 frames. ], batch size: 36, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:24:38,765 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3486, 1.4585, 1.8766, 1.6478, 1.5046, 3.2533, 1.2849, 1.5647], device='cuda:6'), covar=tensor([0.1097, 0.1706, 0.1387, 0.0970, 0.1552, 0.0241, 0.1484, 0.1690], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0076, 0.0090, 0.0080, 0.0085, 0.0079], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:24:54,934 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=149746.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 08:25:20,611 INFO [finetune.py:976] (6/7) Epoch 27, batch 850, loss[loss=0.1296, simple_loss=0.2035, pruned_loss=0.02784, over 4762.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2424, pruned_loss=0.04861, over 942042.74 frames. ], batch size: 27, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:25:21,209 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.021e+02 1.421e+02 1.714e+02 1.950e+02 4.580e+02, threshold=3.429e+02, percent-clipped=2.0 2023-03-27 08:25:22,558 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4166, 1.4511, 1.8755, 1.7153, 1.6208, 3.2736, 1.3239, 1.6144], device='cuda:6'), covar=tensor([0.1012, 0.1714, 0.1003, 0.0888, 0.1472, 0.0230, 0.1436, 0.1687], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0081, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:25:29,324 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.68 vs. 
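Each zipformer.py:1188 line reports, per encoder stack, a warmup window and which layers were stochastically dropped this batch. Occasional num_to_drop=1 entries (e.g. layers_to_drop={0} above) still appear at batch counts far beyond warmup_end, so a small drop probability evidently persists after warmup. A hypothetical schedule with that behavior; the probabilities and the annealing rule are invented for illustration, not taken from zipformer.py.

```python
import random


def choose_layers_to_drop(batch_count: float, warmup_begin: float,
                          warmup_end: float, num_layers: int,
                          p_start: float = 0.5, p_final: float = 0.05) -> set:
    """Hypothetical layer-dropout schedule: the drop probability anneals
    from p_start to p_final across [warmup_begin, warmup_end] and then
    stays at p_final, so late-training batches still drop a layer
    now and then, as in the log."""
    if batch_count <= warmup_begin:
        p = p_start
    elif batch_count >= warmup_end:
        p = p_final
    else:
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        p = p_start + frac * (p_final - p_start)
    return {i for i in range(num_layers) if random.random() < p}
```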
limit=2.0 2023-03-27 08:25:47,303 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7032, 1.6782, 1.4286, 1.5629, 2.0334, 2.0390, 1.6761, 1.4660], device='cuda:6'), covar=tensor([0.0343, 0.0319, 0.0697, 0.0373, 0.0240, 0.0420, 0.0334, 0.0483], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0105, 0.0146, 0.0111, 0.0101, 0.0115, 0.0103, 0.0112], device='cuda:6'), out_proj_covar=tensor([7.7680e-05, 8.0618e-05, 1.1389e-04, 8.4658e-05, 7.8134e-05, 8.4801e-05, 7.6603e-05, 8.5240e-05], device='cuda:6') 2023-03-27 08:25:56,770 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=149807.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:26:09,033 INFO [finetune.py:976] (6/7) Epoch 27, batch 900, loss[loss=0.1608, simple_loss=0.2343, pruned_loss=0.0437, over 4798.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2394, pruned_loss=0.04756, over 947124.89 frames. ], batch size: 51, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:26:30,120 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8789, 1.2907, 1.9545, 1.9850, 1.7084, 1.6865, 1.8633, 1.8882], device='cuda:6'), covar=tensor([0.3639, 0.3800, 0.3120, 0.3349, 0.4697, 0.3546, 0.4084, 0.2844], device='cuda:6'), in_proj_covar=tensor([0.0267, 0.0249, 0.0269, 0.0297, 0.0297, 0.0273, 0.0302, 0.0254], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:26:42,230 INFO [finetune.py:976] (6/7) Epoch 27, batch 950, loss[loss=0.1634, simple_loss=0.2321, pruned_loss=0.04737, over 4783.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2386, pruned_loss=0.04795, over 950440.39 frames. ], batch size: 29, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:26:42,300 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149870.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:26:42,822 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.619e+01 1.503e+02 1.866e+02 2.296e+02 3.689e+02, threshold=3.732e+02, percent-clipped=3.0 2023-03-27 08:27:15,535 INFO [finetune.py:976] (6/7) Epoch 27, batch 1000, loss[loss=0.2139, simple_loss=0.265, pruned_loss=0.08139, over 4145.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2411, pruned_loss=0.04901, over 950464.54 frames. ], batch size: 65, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:27:16,194 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=149921.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:27:30,446 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.43 vs. limit=5.0 2023-03-27 08:27:48,833 INFO [finetune.py:976] (6/7) Epoch 27, batch 1050, loss[loss=0.192, simple_loss=0.253, pruned_loss=0.06551, over 4927.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2438, pruned_loss=0.04955, over 951708.32 frames. ], batch size: 33, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:27:49,415 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.561e+02 1.767e+02 2.240e+02 3.870e+02, threshold=3.534e+02, percent-clipped=1.0 2023-03-27 08:28:12,768 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 08:28:33,250 INFO [finetune.py:976] (6/7) Epoch 27, batch 1100, loss[loss=0.2074, simple_loss=0.2878, pruned_loss=0.06348, over 4838.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2457, pruned_loss=0.0502, over 950984.31 frames. 
], batch size: 49, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:29:06,475 INFO [finetune.py:976] (6/7) Epoch 27, batch 1150, loss[loss=0.1933, simple_loss=0.2682, pruned_loss=0.05916, over 4918.00 frames. ], tot_loss[loss=0.1739, simple_loss=0.2466, pruned_loss=0.0506, over 952827.57 frames. ], batch size: 33, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:29:07,080 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.913e+01 1.470e+02 1.766e+02 2.217e+02 3.439e+02, threshold=3.531e+02, percent-clipped=0.0 2023-03-27 08:29:13,048 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150079.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:29:26,987 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150102.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 08:29:39,272 INFO [finetune.py:976] (6/7) Epoch 27, batch 1200, loss[loss=0.1687, simple_loss=0.2455, pruned_loss=0.04594, over 4816.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2454, pruned_loss=0.05035, over 954347.89 frames. ], batch size: 41, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:29:52,975 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9167, 1.6408, 1.5442, 1.5776, 2.0159, 2.1527, 1.7793, 1.6138], device='cuda:6'), covar=tensor([0.0387, 0.0457, 0.0721, 0.0399, 0.0311, 0.0453, 0.0382, 0.0481], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0106, 0.0147, 0.0111, 0.0101, 0.0116, 0.0104, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.7952e-05, 8.1007e-05, 1.1431e-04, 8.5052e-05, 7.8425e-05, 8.5461e-05, 7.7114e-05, 8.5893e-05], device='cuda:6') 2023-03-27 08:29:52,982 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150140.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:30:14,458 INFO [finetune.py:976] (6/7) Epoch 27, batch 1250, loss[loss=0.1409, simple_loss=0.2091, pruned_loss=0.0363, over 4870.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2438, pruned_loss=0.05004, over 955611.71 frames. ], batch size: 31, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:30:15,042 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150170.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:30:15,536 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.012e+02 1.554e+02 1.886e+02 2.235e+02 6.588e+02, threshold=3.772e+02, percent-clipped=2.0 2023-03-27 08:30:56,393 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150218.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:30:57,557 INFO [finetune.py:976] (6/7) Epoch 27, batch 1300, loss[loss=0.154, simple_loss=0.2188, pruned_loss=0.04454, over 4789.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2396, pruned_loss=0.04851, over 955419.68 frames. ], batch size: 29, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:31:03,324 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150221.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:31:42,216 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150269.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:31:42,764 INFO [finetune.py:976] (6/7) Epoch 27, batch 1350, loss[loss=0.167, simple_loss=0.2429, pruned_loss=0.04554, over 4822.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2389, pruned_loss=0.04814, over 954008.87 frames. 
], batch size: 30, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:31:42,911 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7974, 1.0682, 1.7939, 1.8590, 1.6411, 1.5598, 1.7390, 1.7666], device='cuda:6'), covar=tensor([0.3486, 0.3639, 0.2911, 0.3115, 0.4191, 0.3221, 0.3694, 0.2663], device='cuda:6'), in_proj_covar=tensor([0.0265, 0.0247, 0.0267, 0.0295, 0.0295, 0.0271, 0.0300, 0.0252], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:31:43,342 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.453e+02 1.768e+02 2.125e+02 3.830e+02, threshold=3.537e+02, percent-clipped=1.0 2023-03-27 08:31:45,339 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.08 vs. limit=5.0 2023-03-27 08:31:46,304 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150274.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:32:02,165 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5549, 1.4197, 1.6475, 1.6595, 1.5248, 2.9817, 1.3672, 1.4754], device='cuda:6'), covar=tensor([0.0972, 0.1958, 0.1159, 0.0971, 0.1694, 0.0304, 0.1656, 0.2005], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0090, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:32:10,663 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3595, 1.2894, 1.5592, 1.1397, 1.3838, 1.3956, 1.2687, 1.6201], device='cuda:6'), covar=tensor([0.1167, 0.2158, 0.1232, 0.1522, 0.0861, 0.1235, 0.3043, 0.0845], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0207, 0.0193, 0.0190, 0.0175, 0.0213, 0.0218, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:32:16,594 INFO [finetune.py:976] (6/7) Epoch 27, batch 1400, loss[loss=0.1695, simple_loss=0.2432, pruned_loss=0.04792, over 4173.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2419, pruned_loss=0.04902, over 953618.79 frames. ], batch size: 66, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:32:28,285 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150335.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:32:37,821 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1943, 1.3258, 1.4563, 1.3308, 1.4842, 2.4594, 1.2255, 1.4484], device='cuda:6'), covar=tensor([0.1069, 0.1939, 0.1108, 0.0973, 0.1709, 0.0358, 0.1650, 0.2000], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:32:49,841 INFO [finetune.py:976] (6/7) Epoch 27, batch 1450, loss[loss=0.1532, simple_loss=0.2294, pruned_loss=0.03855, over 4762.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2421, pruned_loss=0.04862, over 954137.89 frames. 
], batch size: 28, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:32:50,437 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.045e+02 1.587e+02 1.925e+02 2.309e+02 4.827e+02, threshold=3.851e+02, percent-clipped=3.0 2023-03-27 08:32:59,295 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7367, 1.4635, 1.9312, 1.3816, 1.7467, 1.8328, 1.3894, 1.9899], device='cuda:6'), covar=tensor([0.1264, 0.2133, 0.1276, 0.1612, 0.0876, 0.1322, 0.3158, 0.0810], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0194, 0.0191, 0.0176, 0.0214, 0.0219, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:33:13,567 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150402.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:33:24,827 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6135, 1.2268, 0.8585, 1.4448, 2.0313, 1.3171, 1.5056, 1.4812], device='cuda:6'), covar=tensor([0.1565, 0.2044, 0.1859, 0.1234, 0.2009, 0.1967, 0.1373, 0.1916], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0091, 0.0119, 0.0092, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 08:33:26,948 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 08:33:29,003 INFO [finetune.py:976] (6/7) Epoch 27, batch 1500, loss[loss=0.1863, simple_loss=0.2662, pruned_loss=0.05318, over 4875.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.244, pruned_loss=0.04931, over 956408.03 frames. ], batch size: 34, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:33:42,990 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150435.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:33:53,601 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150450.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 08:33:56,650 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.88 vs. limit=2.0 2023-03-27 08:34:05,548 INFO [finetune.py:976] (6/7) Epoch 27, batch 1550, loss[loss=0.1615, simple_loss=0.2283, pruned_loss=0.04732, over 4818.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2444, pruned_loss=0.04923, over 954805.14 frames. ], batch size: 30, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:34:06,130 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.290e+01 1.580e+02 1.863e+02 2.206e+02 4.598e+02, threshold=3.727e+02, percent-clipped=2.0 2023-03-27 08:34:38,728 INFO [finetune.py:976] (6/7) Epoch 27, batch 1600, loss[loss=0.1546, simple_loss=0.2306, pruned_loss=0.03926, over 4834.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2423, pruned_loss=0.04884, over 955301.56 frames. ], batch size: 30, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:34:50,055 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150537.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:35:02,043 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.62 vs. limit=2.0 2023-03-27 08:35:11,519 INFO [finetune.py:976] (6/7) Epoch 27, batch 1650, loss[loss=0.1873, simple_loss=0.2425, pruned_loss=0.06603, over 4832.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2408, pruned_loss=0.0489, over 955853.53 frames. 
], batch size: 30, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:35:12,132 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.130e+02 1.535e+02 1.741e+02 2.182e+02 5.670e+02, threshold=3.482e+02, percent-clipped=1.0 2023-03-27 08:35:37,663 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150598.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:35:49,584 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7986, 1.7225, 1.6783, 1.7724, 1.3528, 3.7744, 1.5761, 2.0580], device='cuda:6'), covar=tensor([0.3238, 0.2394, 0.2057, 0.2293, 0.1658, 0.0175, 0.2399, 0.1129], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0115, 0.0120, 0.0124, 0.0113, 0.0095, 0.0093, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0005, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:35:50,791 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1701, 1.2839, 1.3397, 0.6527, 1.2685, 1.5287, 1.5694, 1.3204], device='cuda:6'), covar=tensor([0.0935, 0.0562, 0.0516, 0.0517, 0.0514, 0.0599, 0.0316, 0.0658], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0129, 0.0123, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8818e-05, 1.0608e-04, 9.1557e-05, 8.6565e-05, 9.1821e-05, 9.1885e-05, 1.0091e-04, 1.0740e-04], device='cuda:6') 2023-03-27 08:35:52,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1061, 1.9396, 1.7677, 2.2220, 2.6819, 2.2409, 1.8289, 1.7281], device='cuda:6'), covar=tensor([0.2110, 0.1835, 0.1823, 0.1460, 0.1409, 0.1085, 0.2161, 0.1898], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0200, 0.0247, 0.0192, 0.0218, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:35:54,937 INFO [finetune.py:976] (6/7) Epoch 27, batch 1700, loss[loss=0.1961, simple_loss=0.2616, pruned_loss=0.06533, over 4734.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2394, pruned_loss=0.04825, over 956220.03 frames. ], batch size: 54, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:35:55,090 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1450, 2.0360, 1.6793, 1.7987, 1.9392, 1.9352, 1.9157, 2.6381], device='cuda:6'), covar=tensor([0.3557, 0.3558, 0.3014, 0.3455, 0.3520, 0.2308, 0.3294, 0.1623], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0265, 0.0237, 0.0277, 0.0260, 0.0230, 0.0260, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:36:00,874 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.73 vs. limit=2.0 2023-03-27 08:36:01,047 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150630.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:36:41,946 INFO [finetune.py:976] (6/7) Epoch 27, batch 1750, loss[loss=0.1736, simple_loss=0.2459, pruned_loss=0.05064, over 4862.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2411, pruned_loss=0.04924, over 954766.23 frames. ], batch size: 31, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:36:42,541 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.512e+01 1.530e+02 1.821e+02 2.198e+02 3.521e+02, threshold=3.642e+02, percent-clipped=1.0 2023-03-27 08:37:15,430 INFO [finetune.py:976] (6/7) Epoch 27, batch 1800, loss[loss=0.1275, simple_loss=0.1886, pruned_loss=0.03318, over 4405.00 frames. 
], tot_loss[loss=0.1702, simple_loss=0.2425, pruned_loss=0.04898, over 955177.03 frames. ], batch size: 19, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:37:16,743 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3808, 3.7976, 4.0398, 4.2364, 4.1353, 3.8653, 4.4883, 1.3829], device='cuda:6'), covar=tensor([0.0772, 0.0839, 0.0868, 0.0978, 0.1230, 0.1806, 0.0625, 0.5945], device='cuda:6'), in_proj_covar=tensor([0.0356, 0.0251, 0.0286, 0.0300, 0.0339, 0.0292, 0.0310, 0.0306], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:37:27,753 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:37:33,298 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150735.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:37:35,932 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.30 vs. limit=2.0 2023-03-27 08:37:57,189 INFO [finetune.py:976] (6/7) Epoch 27, batch 1850, loss[loss=0.1547, simple_loss=0.2331, pruned_loss=0.03818, over 4770.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2431, pruned_loss=0.04935, over 955568.24 frames. ], batch size: 28, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:37:57,787 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.063e+02 1.537e+02 1.800e+02 2.248e+02 4.542e+02, threshold=3.600e+02, percent-clipped=6.0 2023-03-27 08:38:05,038 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150783.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:38:11,148 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150793.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 08:38:24,409 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5543, 1.4249, 1.9455, 2.9472, 2.0012, 2.2036, 1.0268, 2.5393], device='cuda:6'), covar=tensor([0.1668, 0.1354, 0.1145, 0.0571, 0.0792, 0.1492, 0.1688, 0.0478], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0131, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 08:38:30,228 INFO [finetune.py:976] (6/7) Epoch 27, batch 1900, loss[loss=0.1753, simple_loss=0.2619, pruned_loss=0.04433, over 4892.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2448, pruned_loss=0.05002, over 956385.14 frames. ], batch size: 43, lr: 2.93e-03, grad_scale: 8.0 2023-03-27 08:38:31,303 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 08:39:10,578 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7381, 1.0681, 1.7909, 1.7833, 1.6018, 1.5409, 1.7186, 1.7569], device='cuda:6'), covar=tensor([0.3278, 0.3263, 0.2638, 0.3031, 0.3757, 0.3197, 0.3308, 0.2503], device='cuda:6'), in_proj_covar=tensor([0.0265, 0.0248, 0.0268, 0.0296, 0.0295, 0.0272, 0.0301, 0.0252], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:39:14,070 INFO [finetune.py:976] (6/7) Epoch 27, batch 1950, loss[loss=0.1919, simple_loss=0.2443, pruned_loss=0.06975, over 4716.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2445, pruned_loss=0.05007, over 955018.44 frames. 
], batch size: 59, lr: 2.92e-03, grad_scale: 8.0 2023-03-27 08:39:14,653 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.033e+02 1.460e+02 1.651e+02 1.933e+02 3.642e+02, threshold=3.302e+02, percent-clipped=1.0 2023-03-27 08:39:28,778 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=150893.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:39:33,130 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=150900.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:39:38,465 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.60 vs. limit=5.0 2023-03-27 08:39:47,873 INFO [finetune.py:976] (6/7) Epoch 27, batch 2000, loss[loss=0.136, simple_loss=0.2106, pruned_loss=0.03072, over 4829.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.242, pruned_loss=0.04969, over 953315.40 frames. ], batch size: 30, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:39:52,704 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.0640, 3.4697, 3.7172, 3.8616, 3.8025, 3.5762, 4.1255, 1.3332], device='cuda:6'), covar=tensor([0.0870, 0.0984, 0.0969, 0.1028, 0.1352, 0.1662, 0.0864, 0.5764], device='cuda:6'), in_proj_covar=tensor([0.0358, 0.0252, 0.0286, 0.0300, 0.0341, 0.0291, 0.0311, 0.0307], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:39:54,518 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=150930.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:40:15,361 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=150961.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:40:21,604 INFO [finetune.py:976] (6/7) Epoch 27, batch 2050, loss[loss=0.1452, simple_loss=0.2215, pruned_loss=0.03445, over 4824.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2381, pruned_loss=0.04777, over 954861.91 frames. ], batch size: 38, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:40:22,192 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.304e+01 1.432e+02 1.658e+02 2.071e+02 3.830e+02, threshold=3.317e+02, percent-clipped=1.0 2023-03-27 08:40:27,052 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=150978.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:40:36,081 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5581, 1.6319, 1.3726, 1.6651, 1.9413, 1.9573, 1.6750, 1.4527], device='cuda:6'), covar=tensor([0.0406, 0.0347, 0.0669, 0.0307, 0.0275, 0.0447, 0.0341, 0.0443], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0106, 0.0147, 0.0112, 0.0102, 0.0116, 0.0104, 0.0114], device='cuda:6'), out_proj_covar=tensor([7.8170e-05, 8.1373e-05, 1.1476e-04, 8.5401e-05, 7.8716e-05, 8.5244e-05, 7.7071e-05, 8.6410e-05], device='cuda:6') 2023-03-27 08:40:46,592 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.18 vs. limit=2.0 2023-03-27 08:40:56,343 INFO [finetune.py:976] (6/7) Epoch 27, batch 2100, loss[loss=0.1904, simple_loss=0.2632, pruned_loss=0.05886, over 4908.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2375, pruned_loss=0.04738, over 954527.10 frames. ], batch size: 36, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:41:47,142 INFO [finetune.py:976] (6/7) Epoch 27, batch 2150, loss[loss=0.1301, simple_loss=0.2006, pruned_loss=0.02986, over 4683.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2403, pruned_loss=0.04817, over 953775.90 frames. 
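grad_scale halved from 16.0 to 8.0 near the end of epoch 26 and doubles back to 16.0 at epoch 27 batch 2000 above, which matches the backoff/growth behavior of PyTorch's AMP loss scaler (backoff_factor 0.5 on an overflowing step, growth_factor 2.0 after growth_interval=2000 consecutive clean steps, GradScaler's defaults; roughly 2000 batches separate the two events here). A minimal runnable illustration with a dummy model and synthetic data; only the GradScaler/autocast calls are the real API.

```python
import torch

model = torch.nn.Linear(80, 500).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.004)
scaler = torch.cuda.amp.GradScaler(init_scale=16.0, growth_factor=2.0,
                                   backoff_factor=0.5, growth_interval=2000)

for step in range(5):
    feats = torch.randn(4, 80, device="cuda")  # stand-in for fbank features
    with torch.cuda.amp.autocast():
        loss = model(feats).pow(2).mean()
    optimizer.zero_grad()
    scaler.scale(loss).backward()
    scaler.step(optimizer)  # skipped, and the scale halved, on inf/nan grads
    scaler.update()         # doubles the scale after enough clean steps
    print(step, scaler.get_scale())
```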
], batch size: 23, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:41:48,292 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.039e+02 1.525e+02 1.813e+02 2.166e+02 3.448e+02, threshold=3.626e+02, percent-clipped=1.0 2023-03-27 08:42:02,501 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151088.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 08:42:23,614 INFO [finetune.py:976] (6/7) Epoch 27, batch 2200, loss[loss=0.1876, simple_loss=0.2623, pruned_loss=0.0564, over 4716.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2435, pruned_loss=0.04935, over 952736.28 frames. ], batch size: 59, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:42:41,015 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 08:42:55,498 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.5127, 1.4709, 1.4209, 0.8846, 1.5280, 1.7501, 1.7531, 1.3180], device='cuda:6'), covar=tensor([0.0814, 0.0525, 0.0528, 0.0499, 0.0450, 0.0509, 0.0291, 0.0635], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0147, 0.0129, 0.0123, 0.0132, 0.0130, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8865e-05, 1.0567e-04, 9.1786e-05, 8.6270e-05, 9.2258e-05, 9.1999e-05, 1.0091e-04, 1.0767e-04], device='cuda:6') 2023-03-27 08:43:04,289 INFO [finetune.py:976] (6/7) Epoch 27, batch 2250, loss[loss=0.147, simple_loss=0.2252, pruned_loss=0.03442, over 4824.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2455, pruned_loss=0.05021, over 952851.23 frames. ], batch size: 47, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:43:04,889 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.120e+01 1.457e+02 1.754e+02 2.221e+02 3.820e+02, threshold=3.509e+02, percent-clipped=1.0 2023-03-27 08:43:14,785 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151184.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:43:20,814 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151193.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:43:37,562 INFO [finetune.py:976] (6/7) Epoch 27, batch 2300, loss[loss=0.1941, simple_loss=0.2723, pruned_loss=0.05794, over 4803.00 frames. ], tot_loss[loss=0.1725, simple_loss=0.2452, pruned_loss=0.04993, over 953142.07 frames. ], batch size: 40, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:43:42,369 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.56 vs. 
limit=2.0 2023-03-27 08:43:47,598 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5525, 1.0689, 0.7993, 1.3453, 1.9771, 0.7274, 1.2971, 1.3317], device='cuda:6'), covar=tensor([0.1529, 0.2105, 0.1663, 0.1210, 0.1933, 0.1884, 0.1468, 0.1989], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 08:43:51,736 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151241.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:43:56,856 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151245.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:06,982 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151256.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:08,274 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4508, 2.3626, 2.0751, 2.5334, 2.2981, 2.2852, 2.2722, 3.2644], device='cuda:6'), covar=tensor([0.3622, 0.4989, 0.3292, 0.4137, 0.4063, 0.2522, 0.4395, 0.1553], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0236, 0.0276, 0.0260, 0.0230, 0.0258, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:44:18,930 INFO [finetune.py:976] (6/7) Epoch 27, batch 2350, loss[loss=0.1152, simple_loss=0.195, pruned_loss=0.01764, over 4788.00 frames. ], tot_loss[loss=0.17, simple_loss=0.242, pruned_loss=0.04897, over 952554.85 frames. ], batch size: 29, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:44:19,866 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0 2023-03-27 08:44:19,965 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.208e+01 1.503e+02 1.827e+02 2.189e+02 3.264e+02, threshold=3.653e+02, percent-clipped=0.0 2023-03-27 08:44:28,759 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151282.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:39,436 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151298.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:44:44,328 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4934, 2.4713, 2.2076, 2.5344, 2.3352, 2.3553, 2.3477, 3.2602], device='cuda:6'), covar=tensor([0.3595, 0.4798, 0.3030, 0.3930, 0.4195, 0.2457, 0.4204, 0.1495], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0236, 0.0276, 0.0260, 0.0230, 0.0258, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:44:52,096 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5456, 1.4800, 1.9661, 1.8024, 1.5863, 3.2743, 1.3326, 1.6049], device='cuda:6'), covar=tensor([0.0940, 0.1711, 0.1200, 0.0891, 0.1517, 0.0241, 0.1509, 0.1698], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 08:44:52,608 INFO [finetune.py:976] (6/7) Epoch 27, batch 2400, loss[loss=0.1653, simple_loss=0.2311, pruned_loss=0.04975, over 4823.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2399, pruned_loss=0.04847, over 952209.29 frames. 
], batch size: 51, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:45:09,235 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151343.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:45:19,415 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151359.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:45:26,014 INFO [finetune.py:976] (6/7) Epoch 27, batch 2450, loss[loss=0.1882, simple_loss=0.2449, pruned_loss=0.06577, over 4868.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2379, pruned_loss=0.04834, over 950358.37 frames. ], batch size: 34, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:45:26,603 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.415e+02 1.689e+02 1.968e+02 4.441e+02, threshold=3.378e+02, percent-clipped=1.0 2023-03-27 08:45:38,470 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151388.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:45:58,928 INFO [finetune.py:976] (6/7) Epoch 27, batch 2500, loss[loss=0.2114, simple_loss=0.2733, pruned_loss=0.07479, over 4102.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2401, pruned_loss=0.0494, over 952837.07 frames. ], batch size: 65, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:46:12,827 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151436.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:46:30,317 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3262, 2.9633, 3.1090, 3.2617, 3.0880, 2.8467, 3.3531, 0.9840], device='cuda:6'), covar=tensor([0.1108, 0.1129, 0.1137, 0.1211, 0.1682, 0.2120, 0.1187, 0.5802], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0248, 0.0283, 0.0297, 0.0335, 0.0289, 0.0307, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:46:36,992 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0 2023-03-27 08:46:47,869 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.36 vs. limit=5.0 2023-03-27 08:46:48,697 INFO [finetune.py:976] (6/7) Epoch 27, batch 2550, loss[loss=0.1534, simple_loss=0.2358, pruned_loss=0.03553, over 4803.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2445, pruned_loss=0.05055, over 954395.81 frames. ], batch size: 51, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:46:49,280 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.209e+01 1.472e+02 1.881e+02 2.470e+02 3.912e+02, threshold=3.762e+02, percent-clipped=2.0 2023-03-27 08:47:05,059 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.81 vs. limit=5.0 2023-03-27 08:47:24,842 INFO [finetune.py:976] (6/7) Epoch 27, batch 2600, loss[loss=0.1579, simple_loss=0.2352, pruned_loss=0.04026, over 4729.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.245, pruned_loss=0.05065, over 955096.31 frames. ], batch size: 54, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:47:42,186 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151540.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:48:03,460 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151556.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:48:16,179 INFO [finetune.py:976] (6/7) Epoch 27, batch 2650, loss[loss=0.1823, simple_loss=0.261, pruned_loss=0.05177, over 4889.00 frames. 
], tot_loss[loss=0.1738, simple_loss=0.2466, pruned_loss=0.05053, over 955072.69 frames. ], batch size: 35, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:48:16,785 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.199e+02 1.649e+02 1.887e+02 2.270e+02 4.456e+02, threshold=3.774e+02, percent-clipped=3.0 2023-03-27 08:48:39,919 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151604.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:48:49,534 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 08:48:49,948 INFO [finetune.py:976] (6/7) Epoch 27, batch 2700, loss[loss=0.1605, simple_loss=0.2339, pruned_loss=0.04351, over 4857.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2457, pruned_loss=0.05009, over 956727.40 frames. ], batch size: 31, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:49:01,309 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151638.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:49:09,369 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2060, 2.2798, 1.8565, 2.3438, 2.1092, 2.1516, 2.1465, 3.0032], device='cuda:6'), covar=tensor([0.3771, 0.4450, 0.3608, 0.4100, 0.4532, 0.2476, 0.4263, 0.1659], device='cuda:6'), in_proj_covar=tensor([0.0288, 0.0263, 0.0236, 0.0274, 0.0259, 0.0228, 0.0257, 0.0236], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:49:12,824 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=151654.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:49:29,777 INFO [finetune.py:976] (6/7) Epoch 27, batch 2750, loss[loss=0.1262, simple_loss=0.204, pruned_loss=0.02419, over 4745.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2436, pruned_loss=0.04967, over 958131.39 frames. ], batch size: 27, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:49:30,372 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.940e+01 1.418e+02 1.693e+02 2.178e+02 3.976e+02, threshold=3.385e+02, percent-clipped=1.0 2023-03-27 08:49:39,355 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 08:49:44,724 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151688.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:49:56,287 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2328, 1.9833, 2.4548, 1.7458, 2.1541, 2.5453, 1.9234, 2.5993], device='cuda:6'), covar=tensor([0.1275, 0.1983, 0.1354, 0.1760, 0.0985, 0.1084, 0.2668, 0.0762], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0206, 0.0192, 0.0189, 0.0174, 0.0212, 0.0217, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:50:06,332 INFO [finetune.py:976] (6/7) Epoch 27, batch 2800, loss[loss=0.1349, simple_loss=0.207, pruned_loss=0.03138, over 4769.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.24, pruned_loss=0.04842, over 952563.93 frames. 
], batch size: 28, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:50:25,000 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151749.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:50:27,873 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0958, 1.7001, 2.3679, 1.5531, 2.1279, 2.2315, 1.5763, 2.4721], device='cuda:6'), covar=tensor([0.1237, 0.1950, 0.1281, 0.1894, 0.0859, 0.1337, 0.2911, 0.0784], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0206, 0.0192, 0.0189, 0.0175, 0.0213, 0.0217, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:50:39,488 INFO [finetune.py:976] (6/7) Epoch 27, batch 2850, loss[loss=0.1635, simple_loss=0.2399, pruned_loss=0.0435, over 4872.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2381, pruned_loss=0.04734, over 954044.85 frames. ], batch size: 31, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:50:39,617 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5424, 1.3735, 1.3293, 1.4894, 1.6747, 1.6287, 1.4289, 1.3241], device='cuda:6'), covar=tensor([0.0345, 0.0308, 0.0635, 0.0309, 0.0244, 0.0440, 0.0291, 0.0397], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0106, 0.0146, 0.0111, 0.0101, 0.0115, 0.0103, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.7445e-05, 8.0914e-05, 1.1383e-04, 8.4677e-05, 7.8179e-05, 8.4841e-05, 7.6200e-05, 8.5593e-05], device='cuda:6') 2023-03-27 08:50:40,098 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.092e+02 1.485e+02 1.795e+02 2.169e+02 3.375e+02, threshold=3.589e+02, percent-clipped=0.0 2023-03-27 08:50:56,924 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.0778, 1.3411, 1.5515, 1.3372, 1.4659, 2.5546, 1.1810, 1.4231], device='cuda:6'), covar=tensor([0.1186, 0.2346, 0.1019, 0.1000, 0.1863, 0.0408, 0.1973, 0.2330], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 08:50:59,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=151801.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:51:12,915 INFO [finetune.py:976] (6/7) Epoch 27, batch 2900, loss[loss=0.1904, simple_loss=0.2661, pruned_loss=0.05735, over 4854.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2401, pruned_loss=0.04791, over 952863.12 frames. ], batch size: 44, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:51:13,042 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4182, 1.2940, 1.2532, 1.3363, 1.6170, 1.5304, 1.4015, 1.2070], device='cuda:6'), covar=tensor([0.0336, 0.0294, 0.0599, 0.0306, 0.0211, 0.0426, 0.0274, 0.0424], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0106, 0.0147, 0.0111, 0.0101, 0.0116, 0.0104, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.7862e-05, 8.1200e-05, 1.1450e-04, 8.5021e-05, 7.8593e-05, 8.5312e-05, 7.6773e-05, 8.6076e-05], device='cuda:6') 2023-03-27 08:51:24,069 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. 
limit=2.0 2023-03-27 08:51:25,554 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151840.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:51:54,447 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=151862.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:52:04,136 INFO [finetune.py:976] (6/7) Epoch 27, batch 2950, loss[loss=0.1698, simple_loss=0.2398, pruned_loss=0.04991, over 4871.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2439, pruned_loss=0.0496, over 954966.73 frames. ], batch size: 31, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:52:04,750 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.132e+02 1.531e+02 1.876e+02 2.281e+02 4.815e+02, threshold=3.752e+02, percent-clipped=2.0 2023-03-27 08:52:11,490 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2048, 2.3181, 1.5485, 2.5575, 2.1995, 1.8284, 3.0553, 2.3025], device='cuda:6'), covar=tensor([0.1366, 0.1927, 0.3304, 0.2722, 0.2639, 0.1676, 0.2459, 0.1782], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0235, 0.0252, 0.0248, 0.0206, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:52:15,643 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151888.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:52:30,091 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1377, 2.2654, 1.8763, 2.2470, 2.1940, 2.1071, 2.1029, 2.9332], device='cuda:6'), covar=tensor([0.3988, 0.4533, 0.3539, 0.4127, 0.4221, 0.2607, 0.4548, 0.1788], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0264, 0.0237, 0.0275, 0.0260, 0.0229, 0.0259, 0.0238], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:52:37,408 INFO [finetune.py:976] (6/7) Epoch 27, batch 3000, loss[loss=0.1555, simple_loss=0.2372, pruned_loss=0.03689, over 4870.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2447, pruned_loss=0.04953, over 954323.11 frames. ], batch size: 31, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:52:37,408 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 08:52:50,759 INFO [finetune.py:1010] (6/7) Epoch 27, validation: loss=0.1572, simple_loss=0.2248, pruned_loss=0.04486, over 2265189.00 frames. 2023-03-27 08:52:50,759 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 08:53:01,836 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151938.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:53:13,670 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.48 vs. limit=5.0 2023-03-27 08:53:14,179 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=151954.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:53:32,164 INFO [finetune.py:976] (6/7) Epoch 27, batch 3050, loss[loss=0.1694, simple_loss=0.24, pruned_loss=0.04941, over 4816.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2452, pruned_loss=0.04945, over 956844.83 frames. 
], batch size: 39, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:53:32,748 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.022e+02 1.546e+02 1.837e+02 2.199e+02 4.500e+02, threshold=3.674e+02, percent-clipped=2.0 2023-03-27 08:53:32,856 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1644, 3.6422, 3.8172, 4.0375, 3.8905, 3.6087, 4.2402, 1.3460], device='cuda:6'), covar=tensor([0.0760, 0.0821, 0.0798, 0.0845, 0.1275, 0.1602, 0.0679, 0.5682], device='cuda:6'), in_proj_covar=tensor([0.0352, 0.0248, 0.0281, 0.0296, 0.0333, 0.0287, 0.0305, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:53:44,360 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=151986.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:53:55,893 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152002.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:54:06,691 INFO [finetune.py:976] (6/7) Epoch 27, batch 3100, loss[loss=0.1702, simple_loss=0.2368, pruned_loss=0.05177, over 4916.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2427, pruned_loss=0.04802, over 955986.39 frames. ], batch size: 36, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:54:23,668 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152044.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:54:41,693 INFO [finetune.py:976] (6/7) Epoch 27, batch 3150, loss[loss=0.1273, simple_loss=0.2048, pruned_loss=0.02492, over 4827.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2398, pruned_loss=0.04709, over 954665.72 frames. ], batch size: 39, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:54:42,286 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.945e+01 1.491e+02 1.827e+02 2.202e+02 3.039e+02, threshold=3.654e+02, percent-clipped=0.0 2023-03-27 08:54:50,499 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 08:55:10,778 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.70 vs. limit=5.0 2023-03-27 08:55:21,807 INFO [finetune.py:976] (6/7) Epoch 27, batch 3200, loss[loss=0.168, simple_loss=0.2352, pruned_loss=0.0504, over 4935.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2367, pruned_loss=0.04643, over 956885.38 frames. ], batch size: 38, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:55:24,944 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9332, 1.3809, 0.8188, 1.6011, 2.1856, 1.4948, 1.5995, 1.7412], device='cuda:6'), covar=tensor([0.1381, 0.1869, 0.1678, 0.1202, 0.1716, 0.1777, 0.1293, 0.1853], device='cuda:6'), in_proj_covar=tensor([0.0088, 0.0092, 0.0107, 0.0090, 0.0117, 0.0091, 0.0096, 0.0086], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 08:55:46,798 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152157.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:55:54,657 INFO [finetune.py:976] (6/7) Epoch 27, batch 3250, loss[loss=0.1683, simple_loss=0.2501, pruned_loss=0.04322, over 4829.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2369, pruned_loss=0.04714, over 955088.09 frames. 
], batch size: 39, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:55:55,264 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.795e+01 1.453e+02 1.756e+02 2.073e+02 3.538e+02, threshold=3.512e+02, percent-clipped=0.0 2023-03-27 08:56:32,279 INFO [finetune.py:976] (6/7) Epoch 27, batch 3300, loss[loss=0.197, simple_loss=0.2719, pruned_loss=0.06103, over 4858.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2402, pruned_loss=0.04804, over 955684.91 frames. ], batch size: 31, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:56:35,417 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152225.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:57:13,838 INFO [finetune.py:976] (6/7) Epoch 27, batch 3350, loss[loss=0.1565, simple_loss=0.2372, pruned_loss=0.03788, over 4828.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.242, pruned_loss=0.04869, over 955703.69 frames. ], batch size: 33, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:57:14,396 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.581e+01 1.606e+02 1.884e+02 2.337e+02 3.345e+02, threshold=3.768e+02, percent-clipped=0.0 2023-03-27 08:57:20,884 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152273.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:57:33,592 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152286.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:57:42,742 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.41 vs. limit=5.0 2023-03-27 08:57:49,939 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-27 08:58:01,023 INFO [finetune.py:976] (6/7) Epoch 27, batch 3400, loss[loss=0.1954, simple_loss=0.2617, pruned_loss=0.06453, over 4280.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2432, pruned_loss=0.04933, over 954374.77 frames. ], batch size: 65, lr: 2.92e-03, grad_scale: 16.0 2023-03-27 08:58:09,669 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152334.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:58:16,694 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152344.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:58:23,116 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7433, 1.5824, 1.5459, 1.6204, 2.0044, 2.0826, 1.7256, 1.5218], device='cuda:6'), covar=tensor([0.0336, 0.0376, 0.0587, 0.0344, 0.0247, 0.0394, 0.0362, 0.0412], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0107, 0.0149, 0.0112, 0.0102, 0.0116, 0.0104, 0.0114], device='cuda:6'), out_proj_covar=tensor([7.8814e-05, 8.1887e-05, 1.1574e-04, 8.5855e-05, 7.9044e-05, 8.5787e-05, 7.7166e-05, 8.6914e-05], device='cuda:6') 2023-03-27 08:58:36,171 INFO [finetune.py:976] (6/7) Epoch 27, batch 3450, loss[loss=0.1554, simple_loss=0.2314, pruned_loss=0.03973, over 4871.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2421, pruned_loss=0.04809, over 953198.34 frames. 
], batch size: 34, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 08:58:36,743 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.916e+01 1.467e+02 1.787e+02 2.252e+02 4.149e+02, threshold=3.573e+02, percent-clipped=3.0 2023-03-27 08:58:58,956 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152392.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:59:18,831 INFO [finetune.py:976] (6/7) Epoch 27, batch 3500, loss[loss=0.2089, simple_loss=0.2608, pruned_loss=0.07848, over 3964.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2396, pruned_loss=0.04737, over 953656.38 frames. ], batch size: 17, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 08:59:34,568 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152445.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:59:43,766 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152457.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 08:59:45,660 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1471, 2.0019, 1.7259, 1.8516, 1.8825, 1.9107, 1.9977, 2.5949], device='cuda:6'), covar=tensor([0.3638, 0.4174, 0.3350, 0.3645, 0.3971, 0.2394, 0.3392, 0.1767], device='cuda:6'), in_proj_covar=tensor([0.0292, 0.0265, 0.0239, 0.0278, 0.0262, 0.0231, 0.0260, 0.0241], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:59:49,094 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 08:59:52,114 INFO [finetune.py:976] (6/7) Epoch 27, batch 3550, loss[loss=0.1402, simple_loss=0.2172, pruned_loss=0.03165, over 4823.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.237, pruned_loss=0.04637, over 954624.19 frames. ], batch size: 39, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 08:59:52,236 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7540, 1.6289, 1.9573, 1.3631, 1.7681, 1.9787, 1.5908, 2.1516], device='cuda:6'), covar=tensor([0.1185, 0.2243, 0.1349, 0.1539, 0.0938, 0.1208, 0.2872, 0.0782], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0194, 0.0190, 0.0175, 0.0214, 0.0218, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 08:59:52,707 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.528e+01 1.381e+02 1.664e+02 2.040e+02 3.997e+02, threshold=3.328e+02, percent-clipped=1.0 2023-03-27 08:59:56,460 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.0864, 4.4569, 4.6264, 4.8971, 4.8567, 4.5378, 5.2291, 1.7001], device='cuda:6'), covar=tensor([0.0685, 0.0725, 0.0715, 0.0885, 0.1014, 0.1511, 0.0417, 0.5389], device='cuda:6'), in_proj_covar=tensor([0.0348, 0.0244, 0.0279, 0.0292, 0.0329, 0.0284, 0.0301, 0.0298], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:00:03,455 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0 2023-03-27 09:00:21,802 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.60 vs. 
limit=5.0 2023-03-27 09:00:25,614 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152505.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:00:26,812 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152506.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:00:36,269 INFO [finetune.py:976] (6/7) Epoch 27, batch 3600, loss[loss=0.1586, simple_loss=0.2283, pruned_loss=0.04441, over 4829.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2362, pruned_loss=0.04648, over 953976.98 frames. ], batch size: 33, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:01:10,218 INFO [finetune.py:976] (6/7) Epoch 27, batch 3650, loss[loss=0.1164, simple_loss=0.1914, pruned_loss=0.02069, over 4767.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2389, pruned_loss=0.04753, over 951588.71 frames. ], batch size: 26, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:01:10,829 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.982e+01 1.594e+02 1.907e+02 2.265e+02 4.160e+02, threshold=3.814e+02, percent-clipped=3.0 2023-03-27 09:01:17,562 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152581.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:01:18,191 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152582.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:01:22,464 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152589.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:01:44,759 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2103, 1.9013, 1.9838, 0.9019, 2.2841, 2.5330, 2.1974, 1.7911], device='cuda:6'), covar=tensor([0.0827, 0.0676, 0.0562, 0.0682, 0.0516, 0.0590, 0.0401, 0.0714], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0147, 0.0130, 0.0123, 0.0132, 0.0131, 0.0142, 0.0151], device='cuda:6'), out_proj_covar=tensor([8.8968e-05, 1.0576e-04, 9.2812e-05, 8.6143e-05, 9.2234e-05, 9.2483e-05, 1.0092e-04, 1.0778e-04], device='cuda:6') 2023-03-27 09:01:46,439 INFO [finetune.py:976] (6/7) Epoch 27, batch 3700, loss[loss=0.1714, simple_loss=0.2504, pruned_loss=0.04616, over 4886.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2414, pruned_loss=0.04814, over 949140.89 frames. ], batch size: 32, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:01:50,269 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8541, 1.8095, 1.5499, 1.9314, 2.2939, 1.9945, 1.7185, 1.5035], device='cuda:6'), covar=tensor([0.2133, 0.1823, 0.1893, 0.1616, 0.1616, 0.1136, 0.2270, 0.1895], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0212, 0.0216, 0.0201, 0.0247, 0.0192, 0.0220, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:01:52,533 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152629.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:02:01,171 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152643.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:02:05,486 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152650.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:02:22,200 INFO [finetune.py:976] (6/7) Epoch 27, batch 3750, loss[loss=0.1938, simple_loss=0.2704, pruned_loss=0.05859, over 4858.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2439, pruned_loss=0.04906, over 952372.42 frames. 
], batch size: 44, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:02:22,799 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.491e+02 1.751e+02 2.166e+02 4.226e+02, threshold=3.502e+02, percent-clipped=3.0 2023-03-27 09:02:31,671 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4733, 1.3849, 1.3702, 1.3901, 0.8592, 2.3625, 0.7974, 1.3409], device='cuda:6'), covar=tensor([0.3523, 0.2541, 0.2328, 0.2639, 0.1975, 0.0335, 0.2742, 0.1312], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 09:03:12,604 INFO [finetune.py:976] (6/7) Epoch 27, batch 3800, loss[loss=0.1821, simple_loss=0.2551, pruned_loss=0.05461, over 4912.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2443, pruned_loss=0.04943, over 951855.32 frames. ], batch size: 37, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:03:13,233 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7075, 1.5142, 1.1536, 0.3142, 1.2547, 1.5085, 1.4916, 1.4195], device='cuda:6'), covar=tensor([0.0889, 0.0862, 0.1338, 0.1998, 0.1397, 0.2361, 0.2367, 0.0846], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0191, 0.0201, 0.0182, 0.0209, 0.0211, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:03:13,927 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 09:03:15,419 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.83 vs. limit=2.0 2023-03-27 09:03:16,874 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=152726.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:03:39,163 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9548, 1.7312, 2.2210, 1.5042, 1.9363, 2.2094, 1.6166, 2.3059], device='cuda:6'), covar=tensor([0.1123, 0.1765, 0.1269, 0.1796, 0.0786, 0.1255, 0.2420, 0.0727], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0194, 0.0190, 0.0175, 0.0214, 0.0218, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:03:45,581 INFO [finetune.py:976] (6/7) Epoch 27, batch 3850, loss[loss=0.1829, simple_loss=0.2493, pruned_loss=0.05825, over 4925.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2438, pruned_loss=0.04909, over 953183.44 frames. 
], batch size: 38, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:03:46,653 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.206e+01 1.335e+02 1.631e+02 2.144e+02 3.589e+02, threshold=3.262e+02, percent-clipped=1.0 2023-03-27 09:03:55,041 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6756, 1.6107, 1.4470, 1.7291, 2.3108, 1.7797, 1.7395, 1.3806], device='cuda:6'), covar=tensor([0.2482, 0.2207, 0.2241, 0.1983, 0.1702, 0.1536, 0.2445, 0.2151], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0210, 0.0214, 0.0199, 0.0245, 0.0190, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:04:00,012 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=152787.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:04:12,776 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152801.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:04:26,501 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1263, 1.9982, 1.8029, 1.9374, 1.8788, 1.8661, 1.9237, 2.6171], device='cuda:6'), covar=tensor([0.3525, 0.3871, 0.3007, 0.3484, 0.3762, 0.2428, 0.3409, 0.1685], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0235, 0.0274, 0.0259, 0.0229, 0.0257, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:04:28,201 INFO [finetune.py:976] (6/7) Epoch 27, batch 3900, loss[loss=0.1145, simple_loss=0.1851, pruned_loss=0.02197, over 4754.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2414, pruned_loss=0.04863, over 953532.27 frames. ], batch size: 54, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:05:01,433 INFO [finetune.py:976] (6/7) Epoch 27, batch 3950, loss[loss=0.1419, simple_loss=0.2121, pruned_loss=0.03584, over 4824.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2376, pruned_loss=0.04716, over 955499.34 frames. ], batch size: 38, lr: 2.91e-03, grad_scale: 16.0 2023-03-27 09:05:02,041 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.044e+02 1.440e+02 1.686e+02 2.039e+02 3.105e+02, threshold=3.372e+02, percent-clipped=0.0 2023-03-27 09:05:09,733 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152881.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:05:10,981 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2307, 1.8887, 2.8296, 4.2416, 2.9001, 2.8371, 1.0483, 3.5760], device='cuda:6'), covar=tensor([0.1623, 0.1307, 0.1202, 0.0403, 0.0737, 0.1462, 0.1949, 0.0361], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:05:14,980 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.37 vs. limit=2.0 2023-03-27 09:05:26,830 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 09:05:43,211 INFO [finetune.py:976] (6/7) Epoch 27, batch 4000, loss[loss=0.2027, simple_loss=0.2736, pruned_loss=0.06594, over 4826.00 frames. ], tot_loss[loss=0.1661, simple_loss=0.2378, pruned_loss=0.04718, over 955096.40 frames. 
], batch size: 33, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:05:43,984 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1026, 2.1817, 1.7597, 2.0589, 2.0810, 2.0116, 2.1492, 2.7276], device='cuda:6'), covar=tensor([0.4132, 0.4185, 0.3265, 0.4000, 0.3888, 0.2624, 0.3602, 0.1869], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0236, 0.0275, 0.0260, 0.0230, 0.0258, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:05:49,735 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152929.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:05:49,762 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=152929.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:05:56,137 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152938.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:06:00,334 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=152945.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:06:06,445 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3101, 2.0368, 2.6761, 1.6748, 2.1717, 2.5672, 1.8124, 2.6066], device='cuda:6'), covar=tensor([0.1338, 0.1964, 0.1560, 0.2085, 0.1098, 0.1396, 0.2621, 0.0898], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0209, 0.0195, 0.0191, 0.0176, 0.0215, 0.0219, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:06:16,491 INFO [finetune.py:976] (6/7) Epoch 27, batch 4050, loss[loss=0.168, simple_loss=0.2457, pruned_loss=0.04514, over 4768.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2401, pruned_loss=0.04763, over 955502.38 frames. ], batch size: 28, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:06:17,094 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.469e+02 1.767e+02 2.180e+02 3.425e+02, threshold=3.534e+02, percent-clipped=1.0 2023-03-27 09:06:20,836 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=152977.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:06:28,911 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8507, 3.3969, 3.5548, 3.4595, 3.4868, 3.2604, 3.9283, 1.7389], device='cuda:6'), covar=tensor([0.1224, 0.1653, 0.1748, 0.2066, 0.1694, 0.2243, 0.1257, 0.6691], device='cuda:6'), in_proj_covar=tensor([0.0355, 0.0249, 0.0285, 0.0297, 0.0335, 0.0289, 0.0307, 0.0305], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:06:30,801 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9818, 2.6828, 2.4103, 1.2366, 2.4850, 2.2339, 2.0494, 2.4345], device='cuda:6'), covar=tensor([0.0709, 0.0827, 0.1377, 0.1971, 0.1192, 0.1884, 0.1996, 0.0909], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0201, 0.0182, 0.0209, 0.0211, 0.0225, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:06:35,818 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 09:06:49,253 INFO [finetune.py:976] (6/7) Epoch 27, batch 4100, loss[loss=0.1751, simple_loss=0.2365, pruned_loss=0.05681, over 4871.00 frames. 
], tot_loss[loss=0.1699, simple_loss=0.2428, pruned_loss=0.04855, over 956088.42 frames. ], batch size: 31, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:07:22,843 INFO [finetune.py:976] (6/7) Epoch 27, batch 4150, loss[loss=0.2286, simple_loss=0.2914, pruned_loss=0.08287, over 4913.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2435, pruned_loss=0.04835, over 957444.77 frames. ], batch size: 36, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:07:23,442 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.058e+02 1.644e+02 1.926e+02 2.373e+02 3.999e+02, threshold=3.851e+02, percent-clipped=3.0 2023-03-27 09:07:31,216 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153082.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:07:51,109 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153101.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:08:07,044 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153117.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:08:13,345 INFO [finetune.py:976] (6/7) Epoch 27, batch 4200, loss[loss=0.1713, simple_loss=0.2437, pruned_loss=0.04944, over 4856.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2441, pruned_loss=0.04839, over 957374.14 frames. ], batch size: 31, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:08:22,496 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 09:08:36,710 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153149.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:08:49,826 INFO [finetune.py:976] (6/7) Epoch 27, batch 4250, loss[loss=0.1539, simple_loss=0.2241, pruned_loss=0.0418, over 4866.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2429, pruned_loss=0.04814, over 957417.81 frames. ], batch size: 31, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:08:50,415 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.682e+01 1.570e+02 1.909e+02 2.227e+02 3.978e+02, threshold=3.818e+02, percent-clipped=1.0 2023-03-27 09:08:54,791 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153178.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:09:33,254 INFO [finetune.py:976] (6/7) Epoch 27, batch 4300, loss[loss=0.1397, simple_loss=0.2098, pruned_loss=0.03475, over 4845.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2412, pruned_loss=0.04815, over 957515.38 frames. ], batch size: 47, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:09:44,691 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153238.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:09:49,805 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153245.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:10:06,729 INFO [finetune.py:976] (6/7) Epoch 27, batch 4350, loss[loss=0.1369, simple_loss=0.2128, pruned_loss=0.03048, over 4820.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2378, pruned_loss=0.04701, over 957683.09 frames. 
], batch size: 40, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:10:07,329 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.429e+01 1.453e+02 1.753e+02 2.112e+02 4.699e+02, threshold=3.507e+02, percent-clipped=1.0 2023-03-27 09:10:11,668 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153278.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:10:16,465 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153286.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:10:19,598 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.80 vs. limit=5.0 2023-03-27 09:10:21,203 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153293.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:10:37,837 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.53 vs. limit=2.0 2023-03-27 09:10:40,548 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5778, 1.4757, 2.0476, 3.1708, 2.1467, 2.2911, 0.9955, 2.7054], device='cuda:6'), covar=tensor([0.1688, 0.1375, 0.1178, 0.0581, 0.0798, 0.1395, 0.1770, 0.0504], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0133, 0.0164, 0.0100, 0.0135, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:10:41,653 INFO [finetune.py:976] (6/7) Epoch 27, batch 4400, loss[loss=0.1742, simple_loss=0.2382, pruned_loss=0.05511, over 4900.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2398, pruned_loss=0.04819, over 958821.00 frames. ], batch size: 35, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:10:45,573 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 09:11:01,732 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153339.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:11:04,166 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4618, 1.5228, 1.2965, 1.5091, 1.7943, 1.6912, 1.5424, 1.3150], device='cuda:6'), covar=tensor([0.0367, 0.0273, 0.0599, 0.0280, 0.0210, 0.0426, 0.0303, 0.0403], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0106, 0.0148, 0.0112, 0.0102, 0.0116, 0.0104, 0.0114], device='cuda:6'), out_proj_covar=tensor([7.8494e-05, 8.1382e-05, 1.1553e-04, 8.5741e-05, 7.8732e-05, 8.5657e-05, 7.7309e-05, 8.6235e-05], device='cuda:6') 2023-03-27 09:11:23,047 INFO [finetune.py:976] (6/7) Epoch 27, batch 4450, loss[loss=0.1791, simple_loss=0.2532, pruned_loss=0.05251, over 4917.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2441, pruned_loss=0.04906, over 959199.16 frames. 
], batch size: 37, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:11:23,636 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.494e+02 1.793e+02 2.132e+02 3.020e+02, threshold=3.586e+02, percent-clipped=0.0 2023-03-27 09:11:26,051 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9661, 2.1590, 1.2896, 2.6756, 3.1944, 2.4210, 2.5819, 2.5478], device='cuda:6'), covar=tensor([0.1077, 0.1629, 0.1761, 0.0867, 0.1328, 0.1441, 0.1013, 0.1546], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0109, 0.0091, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 09:11:27,253 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6116, 3.4734, 3.3945, 1.6717, 3.6509, 2.7943, 1.0828, 2.4604], device='cuda:6'), covar=tensor([0.2814, 0.1934, 0.1451, 0.3452, 0.1072, 0.0989, 0.3979, 0.1579], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0180, 0.0160, 0.0131, 0.0162, 0.0124, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 09:11:30,928 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153382.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:11:31,015 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.95 vs. limit=5.0 2023-03-27 09:11:56,789 INFO [finetune.py:976] (6/7) Epoch 27, batch 4500, loss[loss=0.2515, simple_loss=0.304, pruned_loss=0.09948, over 4847.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2457, pruned_loss=0.04991, over 956754.79 frames. ], batch size: 47, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:11:57,457 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153421.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:12:03,376 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153430.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:12:27,702 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4835, 2.3866, 1.9692, 2.5118, 2.3670, 2.1125, 2.7740, 2.5104], device='cuda:6'), covar=tensor([0.1294, 0.1956, 0.3004, 0.2331, 0.2584, 0.1731, 0.2513, 0.1642], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0191, 0.0236, 0.0254, 0.0249, 0.0206, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:12:29,936 INFO [finetune.py:976] (6/7) Epoch 27, batch 4550, loss[loss=0.1516, simple_loss=0.2214, pruned_loss=0.04087, over 4867.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2471, pruned_loss=0.0501, over 956580.93 frames. 
], batch size: 34, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:12:30,513 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.518e+01 1.590e+02 1.867e+02 2.229e+02 3.919e+02, threshold=3.734e+02, percent-clipped=1.0 2023-03-27 09:12:31,775 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153473.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:12:37,683 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153482.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:12:38,876 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9783, 2.6868, 2.5189, 1.2966, 2.6815, 2.1094, 2.1044, 2.4728], device='cuda:6'), covar=tensor([0.1228, 0.0886, 0.1813, 0.2325, 0.1657, 0.2297, 0.2212, 0.1207], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0192, 0.0201, 0.0182, 0.0211, 0.0211, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:12:53,359 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.70 vs. limit=2.0 2023-03-27 09:13:14,340 INFO [finetune.py:976] (6/7) Epoch 27, batch 4600, loss[loss=0.1781, simple_loss=0.2376, pruned_loss=0.05934, over 4889.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.246, pruned_loss=0.04988, over 955903.52 frames. ], batch size: 32, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:13:37,096 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9075, 3.4084, 3.6041, 3.7996, 3.6981, 3.4182, 3.9621, 1.1367], device='cuda:6'), covar=tensor([0.0832, 0.0904, 0.0956, 0.0968, 0.1314, 0.1695, 0.0842, 0.5906], device='cuda:6'), in_proj_covar=tensor([0.0352, 0.0248, 0.0283, 0.0294, 0.0333, 0.0287, 0.0305, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:13:51,379 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-27 09:13:56,978 INFO [finetune.py:976] (6/7) Epoch 27, batch 4650, loss[loss=0.1581, simple_loss=0.2272, pruned_loss=0.04447, over 4902.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2432, pruned_loss=0.04945, over 954899.08 frames. ], batch size: 43, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:13:57,586 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.461e+02 1.737e+02 2.165e+02 3.643e+02, threshold=3.475e+02, percent-clipped=0.0 2023-03-27 09:14:31,814 INFO [finetune.py:976] (6/7) Epoch 27, batch 4700, loss[loss=0.2055, simple_loss=0.2551, pruned_loss=0.07799, over 4872.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.24, pruned_loss=0.04877, over 956402.17 frames. 
], batch size: 34, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:14:47,944 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153634.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:14:57,624 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1240, 1.9612, 1.7035, 1.7890, 1.8624, 1.8965, 1.8728, 2.5902], device='cuda:6'), covar=tensor([0.3610, 0.3980, 0.3413, 0.3666, 0.3778, 0.2420, 0.3633, 0.1655], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0265, 0.0238, 0.0277, 0.0262, 0.0231, 0.0260, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:15:12,023 INFO [finetune.py:976] (6/7) Epoch 27, batch 4750, loss[loss=0.2016, simple_loss=0.2748, pruned_loss=0.06417, over 4914.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2385, pruned_loss=0.04788, over 956946.61 frames. ], batch size: 32, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:15:13,078 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.473e+02 1.795e+02 2.173e+02 4.465e+02, threshold=3.590e+02, percent-clipped=3.0 2023-03-27 09:15:29,987 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6432, 1.0985, 0.8717, 1.4371, 2.0258, 1.1282, 1.3471, 1.4775], device='cuda:6'), covar=tensor([0.1583, 0.2133, 0.1894, 0.1265, 0.2015, 0.1939, 0.1502, 0.2052], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0110, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 09:15:45,873 INFO [finetune.py:976] (6/7) Epoch 27, batch 4800, loss[loss=0.1872, simple_loss=0.2773, pruned_loss=0.04856, over 4931.00 frames. ], tot_loss[loss=0.1692, simple_loss=0.2407, pruned_loss=0.04879, over 957227.52 frames. ], batch size: 42, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:16:28,411 INFO [finetune.py:976] (6/7) Epoch 27, batch 4850, loss[loss=0.1998, simple_loss=0.2759, pruned_loss=0.06184, over 4895.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2427, pruned_loss=0.04871, over 954313.24 frames. ], batch size: 35, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:16:28,977 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.100e+02 1.544e+02 1.777e+02 2.223e+02 4.381e+02, threshold=3.554e+02, percent-clipped=2.0 2023-03-27 09:16:30,746 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153773.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:16:33,646 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=153777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:16:34,310 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3870, 2.3750, 1.8777, 2.2941, 2.2744, 1.9786, 2.6717, 2.4178], device='cuda:6'), covar=tensor([0.1434, 0.1893, 0.2994, 0.2600, 0.2648, 0.1860, 0.2719, 0.1627], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0191, 0.0237, 0.0255, 0.0251, 0.0208, 0.0217, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:16:57,594 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-27 09:17:00,917 INFO [finetune.py:976] (6/7) Epoch 27, batch 4900, loss[loss=0.2159, simple_loss=0.2805, pruned_loss=0.07566, over 4839.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2448, pruned_loss=0.05004, over 952586.71 frames. 
], batch size: 47, lr: 2.91e-03, grad_scale: 32.0 2023-03-27 09:17:02,072 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153821.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:17:34,599 INFO [finetune.py:976] (6/7) Epoch 27, batch 4950, loss[loss=0.1825, simple_loss=0.2564, pruned_loss=0.05429, over 4817.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2448, pruned_loss=0.0499, over 952326.18 frames. ], batch size: 33, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:17:35,192 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.496e+02 1.750e+02 2.158e+02 3.393e+02, threshold=3.501e+02, percent-clipped=0.0 2023-03-27 09:17:51,377 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.85 vs. limit=5.0 2023-03-27 09:17:53,714 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0 2023-03-27 09:18:10,031 INFO [finetune.py:976] (6/7) Epoch 27, batch 5000, loss[loss=0.1367, simple_loss=0.2002, pruned_loss=0.0366, over 4726.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2437, pruned_loss=0.04997, over 952009.64 frames. ], batch size: 23, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:18:17,246 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=153923.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:18:28,733 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=153934.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:18:46,116 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2016, 1.2400, 1.2400, 0.7108, 1.2560, 1.4041, 1.5090, 1.1962], device='cuda:6'), covar=tensor([0.0790, 0.0582, 0.0533, 0.0449, 0.0539, 0.0572, 0.0293, 0.0592], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0130, 0.0123, 0.0132, 0.0131, 0.0143, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8647e-05, 1.0579e-04, 9.2178e-05, 8.6033e-05, 9.2166e-05, 9.2555e-05, 1.0143e-04, 1.0750e-04], device='cuda:6') 2023-03-27 09:19:01,714 INFO [finetune.py:976] (6/7) Epoch 27, batch 5050, loss[loss=0.1712, simple_loss=0.2398, pruned_loss=0.05134, over 4898.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2404, pruned_loss=0.04833, over 952523.04 frames. ], batch size: 37, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:19:02,312 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.210e+01 1.435e+02 1.808e+02 2.168e+02 4.775e+02, threshold=3.617e+02, percent-clipped=1.0 2023-03-27 09:19:05,463 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. 
limit=2.0 2023-03-27 09:19:10,569 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=153982.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:19:11,854 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=153984.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:19:29,658 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1473, 1.7262, 2.4491, 1.6361, 2.1652, 2.3052, 1.6174, 2.4165], device='cuda:6'), covar=tensor([0.1184, 0.1850, 0.1281, 0.1751, 0.0772, 0.1257, 0.2749, 0.0739], device='cuda:6'), in_proj_covar=tensor([0.0194, 0.0209, 0.0196, 0.0191, 0.0176, 0.0216, 0.0219, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:19:36,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7032, 1.3807, 0.8010, 1.6969, 2.1141, 1.4923, 1.6213, 1.6397], device='cuda:6'), covar=tensor([0.1474, 0.1879, 0.1935, 0.1092, 0.1889, 0.1791, 0.1292, 0.1813], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 09:19:36,571 INFO [finetune.py:976] (6/7) Epoch 27, batch 5100, loss[loss=0.1649, simple_loss=0.2366, pruned_loss=0.04665, over 4823.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2375, pruned_loss=0.04704, over 954505.29 frames. ], batch size: 41, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:19:49,568 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1455, 1.4066, 1.4385, 0.7606, 1.3907, 1.6458, 1.6983, 1.3517], device='cuda:6'), covar=tensor([0.0816, 0.0568, 0.0537, 0.0458, 0.0511, 0.0544, 0.0320, 0.0661], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0130, 0.0123, 0.0132, 0.0131, 0.0143, 0.0151], device='cuda:6'), out_proj_covar=tensor([8.8535e-05, 1.0573e-04, 9.2189e-05, 8.6125e-05, 9.2255e-05, 9.2720e-05, 1.0127e-04, 1.0764e-04], device='cuda:6') 2023-03-27 09:20:06,451 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154049.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:20:19,664 INFO [finetune.py:976] (6/7) Epoch 27, batch 5150, loss[loss=0.1692, simple_loss=0.2533, pruned_loss=0.04254, over 4815.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2375, pruned_loss=0.04674, over 954714.08 frames. 
], batch size: 40, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:20:20,251 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.883e+01 1.500e+02 1.789e+02 2.109e+02 4.792e+02, threshold=3.578e+02, percent-clipped=3.0 2023-03-27 09:20:20,977 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5278, 1.3664, 1.9398, 2.9270, 1.9206, 2.2372, 0.9791, 2.4941], device='cuda:6'), covar=tensor([0.1665, 0.1398, 0.1174, 0.0597, 0.0887, 0.1293, 0.1745, 0.0496], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0116, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:20:22,217 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.3145, 4.5660, 4.8858, 5.1981, 5.0781, 4.7993, 5.4437, 1.6968], device='cuda:6'), covar=tensor([0.0742, 0.0893, 0.0725, 0.0796, 0.1231, 0.1660, 0.0609, 0.5819], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0246, 0.0282, 0.0293, 0.0332, 0.0285, 0.0304, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:20:23,511 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154076.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:20:24,076 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154077.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:20:28,828 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154084.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:20:46,985 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154110.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:20:49,989 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0828, 1.9703, 2.1228, 1.4732, 2.0216, 2.1718, 2.1173, 1.7485], device='cuda:6'), covar=tensor([0.0532, 0.0625, 0.0646, 0.0825, 0.0727, 0.0632, 0.0615, 0.1035], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0136, 0.0140, 0.0119, 0.0128, 0.0138, 0.0139, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:20:53,422 INFO [finetune.py:976] (6/7) Epoch 27, batch 5200, loss[loss=0.1636, simple_loss=0.2417, pruned_loss=0.04275, over 4831.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2402, pruned_loss=0.04733, over 953006.55 frames. 
], batch size: 33, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:20:56,441 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154125.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:21:03,738 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0268, 1.5699, 2.3252, 1.6071, 2.0792, 2.3204, 1.5039, 2.3873], device='cuda:6'), covar=tensor([0.1279, 0.2332, 0.1344, 0.1863, 0.0940, 0.1265, 0.3143, 0.0812], device='cuda:6'), in_proj_covar=tensor([0.0195, 0.0211, 0.0197, 0.0192, 0.0177, 0.0217, 0.0221, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:21:04,807 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154137.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:21:12,350 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154145.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:21:33,075 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-27 09:21:34,724 INFO [finetune.py:976] (6/7) Epoch 27, batch 5250, loss[loss=0.1968, simple_loss=0.2661, pruned_loss=0.06376, over 4887.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2437, pruned_loss=0.04839, over 956212.22 frames. ], batch size: 32, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:21:35,330 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.556e+02 1.889e+02 2.346e+02 3.556e+02, threshold=3.778e+02, percent-clipped=0.0 2023-03-27 09:22:08,469 INFO [finetune.py:976] (6/7) Epoch 27, batch 5300, loss[loss=0.1842, simple_loss=0.25, pruned_loss=0.05918, over 4751.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2457, pruned_loss=0.04935, over 951076.48 frames. ], batch size: 54, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:22:09,186 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154221.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:22:23,391 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9485, 1.8954, 1.6774, 2.1372, 2.4960, 2.1523, 1.8459, 1.6087], device='cuda:6'), covar=tensor([0.2355, 0.2033, 0.1901, 0.1704, 0.1594, 0.1193, 0.2208, 0.1956], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0211, 0.0216, 0.0200, 0.0246, 0.0191, 0.0218, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:22:41,917 INFO [finetune.py:976] (6/7) Epoch 27, batch 5350, loss[loss=0.1397, simple_loss=0.2174, pruned_loss=0.03098, over 4815.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2455, pruned_loss=0.04922, over 952045.73 frames. 
], batch size: 47, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:22:42,509 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.999e+01 1.513e+02 1.798e+02 2.139e+02 3.270e+02, threshold=3.596e+02, percent-clipped=0.0 2023-03-27 09:22:47,881 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154279.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:22:49,752 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154282.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:22:52,175 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154286.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:22:55,617 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154291.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:23:09,647 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 09:23:15,340 INFO [finetune.py:976] (6/7) Epoch 27, batch 5400, loss[loss=0.154, simple_loss=0.2345, pruned_loss=0.03676, over 4753.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2433, pruned_loss=0.04917, over 949865.79 frames. ], batch size: 27, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:23:33,037 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154337.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:23:43,276 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154347.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:23:49,141 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154352.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:24:08,870 INFO [finetune.py:976] (6/7) Epoch 27, batch 5450, loss[loss=0.1438, simple_loss=0.2299, pruned_loss=0.02884, over 4913.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2393, pruned_loss=0.04791, over 951242.91 frames. ], batch size: 37, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:24:09,463 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.752e+01 1.439e+02 1.730e+02 2.063e+02 4.741e+02, threshold=3.460e+02, percent-clipped=1.0 2023-03-27 09:24:24,627 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2918, 2.2211, 1.7794, 2.1554, 2.2224, 1.9771, 2.5238, 2.2923], device='cuda:6'), covar=tensor([0.1265, 0.1843, 0.2934, 0.2458, 0.2399, 0.1644, 0.2831, 0.1586], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0253, 0.0249, 0.0206, 0.0214, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:24:26,431 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154398.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 09:24:28,339 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.21 vs. limit=5.0 2023-03-27 09:24:32,229 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154405.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:24:42,542 INFO [finetune.py:976] (6/7) Epoch 27, batch 5500, loss[loss=0.1513, simple_loss=0.2219, pruned_loss=0.04037, over 4747.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.2359, pruned_loss=0.04686, over 950099.95 frames. 
], batch size: 27, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:24:49,896 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154432.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:24:54,784 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154440.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:25:27,011 INFO [finetune.py:976] (6/7) Epoch 27, batch 5550, loss[loss=0.2305, simple_loss=0.3092, pruned_loss=0.07585, over 4836.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2374, pruned_loss=0.04687, over 951876.39 frames. ], batch size: 47, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:25:27,596 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.282e+01 1.501e+02 1.802e+02 2.038e+02 5.335e+02, threshold=3.603e+02, percent-clipped=2.0 2023-03-27 09:25:29,567 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154474.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:25:57,467 INFO [finetune.py:976] (6/7) Epoch 27, batch 5600, loss[loss=0.1624, simple_loss=0.24, pruned_loss=0.04235, over 4688.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2411, pruned_loss=0.04792, over 949624.90 frames. ], batch size: 23, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:26:07,315 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154535.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:26:14,537 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.65 vs. limit=5.0 2023-03-27 09:26:30,304 INFO [finetune.py:976] (6/7) Epoch 27, batch 5650, loss[loss=0.262, simple_loss=0.3162, pruned_loss=0.1039, over 4062.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2445, pruned_loss=0.04889, over 947426.95 frames. ], batch size: 65, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:26:30,862 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.301e+01 1.398e+02 1.740e+02 2.119e+02 4.723e+02, threshold=3.480e+02, percent-clipped=2.0 2023-03-27 09:26:39,111 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154577.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:26:40,304 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154579.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:27:07,855 INFO [finetune.py:976] (6/7) Epoch 27, batch 5700, loss[loss=0.1319, simple_loss=0.1919, pruned_loss=0.03593, over 4469.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2401, pruned_loss=0.0486, over 925104.52 frames. 
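The scaling.py:679 Whitening lines compare a per-module metric against a limit (2.0 for 8 groups of 96 channels, 5.0 for a single group of 384 channels). A natural reading is that the metric measures how far the within-group channel covariance is from a multiple of the identity, equalling 1.0 for perfectly "white" features; the sketch below computes such a metric and is a plausible reconstruction, not icefall's exact code:

    import torch

    def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
        """1.0 when each channel group's covariance is a multiple of the
        identity; larger when energy concentrates in a few directions.

        x: (num_frames, num_channels), channels split into num_groups groups.
        """
        num_frames, num_channels = x.shape
        cpg = num_channels // num_groups                  # channels per group
        xg = x.reshape(num_frames, num_groups, cpg).permute(1, 0, 2)
        cov = xg.transpose(1, 2) @ xg / num_frames        # (groups, cpg, cpg)
        diag_mean = cov.diagonal(dim1=1, dim2=2).mean(dim=1)
        metric = (cov ** 2).sum(dim=(1, 2)) / (cpg * diag_mean ** 2)
        return metric.mean().item()

Values under the limit, like metric=1.13 vs. limit=2.0 above, need no intervention; the module is there to push activations back toward whiteness only when the limit is exceeded.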
], batch size: 19, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:27:12,037 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154627.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:27:12,704 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154628.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:27:20,352 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9675, 1.8313, 2.4349, 3.6508, 2.5098, 2.8607, 1.5565, 2.9933], device='cuda:6'), covar=tensor([0.1942, 0.1730, 0.1450, 0.0810, 0.0920, 0.1251, 0.1869, 0.0611], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:27:20,866 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154642.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:27:22,051 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7230, 2.6634, 3.2131, 4.5853, 3.3398, 3.4219, 1.7875, 3.9303], device='cuda:6'), covar=tensor([0.1485, 0.1119, 0.1161, 0.0556, 0.0664, 0.0990, 0.1698, 0.0370], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:27:34,184 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154647.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:27:34,737 INFO [finetune.py:976] (6/7) Epoch 28, batch 0, loss[loss=0.1506, simple_loss=0.2281, pruned_loss=0.03659, over 4716.00 frames. ], tot_loss[loss=0.1506, simple_loss=0.2281, pruned_loss=0.03659, over 4716.00 frames. ], batch size: 54, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:27:34,738 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 09:27:44,945 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.2298, 1.3896, 1.3879, 0.7625, 1.4000, 1.5736, 1.6400, 1.3183], device='cuda:6'), covar=tensor([0.0925, 0.0646, 0.0629, 0.0528, 0.0558, 0.0644, 0.0364, 0.0789], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0129, 0.0121, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8337e-05, 1.0504e-04, 9.1824e-05, 8.5222e-05, 9.1862e-05, 9.1862e-05, 1.0083e-04, 1.0708e-04], device='cuda:6') 2023-03-27 09:27:54,285 INFO [finetune.py:1010] (6/7) Epoch 28, validation: loss=0.1583, simple_loss=0.2265, pruned_loss=0.04511, over 2265189.00 frames. 
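At the epoch-28 rollover above, finetune.py pauses training, runs the whole dev set, and reports one frame-weighted validation loss (loss=0.1583 over 2265189.00 frames). A minimal sketch of that pass, assuming a loader of batches and a criterion returning a mean loss plus a frame count (names are illustrative):

    import torch

    def validate(model, dev_loader, compute_loss):
        """Frame-weighted dev loss, reported as 'validation: loss=... over N frames.'"""
        model.eval()
        tot_loss, tot_frames = 0.0, 0.0
        with torch.no_grad():
            for batch in dev_loader:
                loss, num_frames = compute_loss(model, batch)
                tot_loss += loss.item() * num_frames
                tot_frames += num_frames
        model.train()
        return tot_loss / tot_frames, tot_frames

Weighting by frames rather than by batch makes the dev number independent of how the bucketing sampler happened to group utterances.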
2023-03-27 09:27:54,286 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 09:27:55,412 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7702, 1.2761, 0.9134, 1.6759, 2.1318, 1.5532, 1.5553, 1.6728], device='cuda:6'), covar=tensor([0.1372, 0.1874, 0.1713, 0.1100, 0.1813, 0.1791, 0.1289, 0.1728], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0091, 0.0119, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 09:27:59,657 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5726, 1.3903, 2.0626, 3.0120, 1.9870, 2.4088, 1.1202, 2.6217], device='cuda:6'), covar=tensor([0.1779, 0.1871, 0.1368, 0.0903, 0.0997, 0.1583, 0.1978, 0.0594], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:28:08,715 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.011e+02 1.488e+02 1.773e+02 2.221e+02 3.199e+02, threshold=3.546e+02, percent-clipped=0.0 2023-03-27 09:28:19,996 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9430, 1.8355, 2.0334, 1.1432, 1.9887, 1.9839, 1.9629, 1.6422], device='cuda:6'), covar=tensor([0.0613, 0.0699, 0.0641, 0.0958, 0.0763, 0.0740, 0.0625, 0.1181], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0120, 0.0129, 0.0140, 0.0140, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:28:21,221 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154689.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:28:24,052 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154693.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:28:27,016 INFO [finetune.py:976] (6/7) Epoch 28, batch 50, loss[loss=0.1864, simple_loss=0.2558, pruned_loss=0.0585, over 4877.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2449, pruned_loss=0.04749, over 218166.90 frames. ], batch size: 32, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:28:32,660 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154705.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:28:35,800 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1363, 1.9996, 1.7865, 2.0432, 1.9316, 1.9582, 1.9709, 2.6743], device='cuda:6'), covar=tensor([0.3783, 0.4577, 0.3269, 0.3698, 0.4290, 0.2480, 0.3967, 0.1691], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0263, 0.0236, 0.0274, 0.0260, 0.0228, 0.0258, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:28:44,270 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154720.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:28:52,105 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154732.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:28:57,939 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154740.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:03,129 INFO [finetune.py:976] (6/7) Epoch 28, batch 100, loss[loss=0.131, simple_loss=0.2123, pruned_loss=0.02484, over 4780.00 frames. 
], tot_loss[loss=0.1654, simple_loss=0.2378, pruned_loss=0.04648, over 379997.87 frames. ], batch size: 26, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:29:10,693 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2545, 2.9407, 2.8035, 1.2550, 2.9584, 2.2801, 0.7362, 1.8834], device='cuda:6'), covar=tensor([0.2405, 0.2287, 0.1996, 0.3747, 0.1646, 0.1145, 0.4122, 0.1745], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0179, 0.0159, 0.0130, 0.0161, 0.0123, 0.0148, 0.0124], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 09:29:11,915 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154753.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:29:13,667 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1609, 2.0044, 2.2027, 1.3988, 2.0846, 2.1325, 2.1320, 1.6673], device='cuda:6'), covar=tensor([0.0497, 0.0613, 0.0585, 0.0858, 0.0658, 0.0620, 0.0544, 0.1107], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0138, 0.0141, 0.0120, 0.0128, 0.0139, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:29:26,909 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.078e+02 1.413e+02 1.713e+02 2.093e+02 4.180e+02, threshold=3.426e+02, percent-clipped=2.0 2023-03-27 09:29:32,450 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154780.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:33,111 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=154781.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:37,839 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154788.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:29:44,900 INFO [finetune.py:976] (6/7) Epoch 28, batch 150, loss[loss=0.1299, simple_loss=0.1922, pruned_loss=0.03387, over 4833.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2339, pruned_loss=0.04552, over 509266.62 frames. ], batch size: 40, lr: 2.90e-03, grad_scale: 32.0 2023-03-27 09:29:51,263 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.45 vs. limit=2.0 2023-03-27 09:29:56,987 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1585, 1.9007, 1.6749, 1.9756, 2.5577, 2.2080, 2.1070, 1.6625], device='cuda:6'), covar=tensor([0.1965, 0.1830, 0.1769, 0.1627, 0.1546, 0.1066, 0.1876, 0.1821], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0200, 0.0246, 0.0192, 0.0218, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:29:57,825 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 09:29:59,453 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 09:30:06,040 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154830.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:30:18,388 INFO [finetune.py:976] (6/7) Epoch 28, batch 200, loss[loss=0.1801, simple_loss=0.252, pruned_loss=0.05413, over 4900.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2342, pruned_loss=0.04611, over 607521.98 frames. 
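Every loss[...] record in this log decomposes the same way: loss = 0.5 * simple_loss + pruned_loss (for the tot_loss just above, 0.5 * 0.2378 + 0.04648 = 0.16538, which rounds to the reported 0.1654). That is the pruned-RNN-T objective with a simple-loss weight of 0.5, read off the logged numbers themselves; a quick consistency check:

    # (loss, simple_loss, pruned_loss) triples copied from tot_loss records above.
    records = [
        (0.1674, 0.2402, 0.04733),  # Epoch 27, batch 5200
        (0.1699, 0.2449, 0.04749),  # Epoch 28, batch 50
        (0.1654, 0.2378, 0.04648),  # Epoch 28, batch 100
    ]
    for loss, simple_loss, pruned_loss in records:
        assert abs(loss - (0.5 * simple_loss + pruned_loss)) < 5e-4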
], batch size: 43, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:30:40,611 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.190e+01 1.565e+02 1.831e+02 2.234e+02 3.641e+02, threshold=3.662e+02, percent-clipped=1.0 2023-03-27 09:30:48,055 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154877.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:31:02,679 INFO [finetune.py:976] (6/7) Epoch 28, batch 250, loss[loss=0.215, simple_loss=0.2892, pruned_loss=0.07042, over 4803.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2374, pruned_loss=0.04717, over 685210.76 frames. ], batch size: 41, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:31:14,954 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0 2023-03-27 09:31:20,576 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154925.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:31:31,424 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154942.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:31:34,983 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154947.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:31:35,506 INFO [finetune.py:976] (6/7) Epoch 28, batch 300, loss[loss=0.1571, simple_loss=0.2392, pruned_loss=0.03752, over 4855.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2417, pruned_loss=0.04786, over 746574.96 frames. ], batch size: 31, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:31:42,230 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.3638, 1.3392, 1.3214, 0.7477, 1.4057, 1.5177, 1.6439, 1.2454], device='cuda:6'), covar=tensor([0.0773, 0.0586, 0.0559, 0.0493, 0.0480, 0.0586, 0.0319, 0.0634], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0129, 0.0122, 0.0131, 0.0130, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8628e-05, 1.0545e-04, 9.1969e-05, 8.5533e-05, 9.1986e-05, 9.1993e-05, 1.0103e-04, 1.0706e-04], device='cuda:6') 2023-03-27 09:31:51,475 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.026e+02 1.523e+02 1.869e+02 2.212e+02 3.864e+02, threshold=3.739e+02, percent-clipped=1.0 2023-03-27 09:31:59,677 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=154976.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:07,920 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=154984.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:11,541 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154990.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:13,402 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=154993.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:32:15,017 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=154995.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:16,795 INFO [finetune.py:976] (6/7) Epoch 28, batch 350, loss[loss=0.1689, simple_loss=0.2502, pruned_loss=0.04384, over 4921.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2448, pruned_loss=0.04957, over 792379.14 frames. 
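The learning rate ticks down from 2.90e-03 to 2.89e-03 here (and to 2.88e-03 later in the epoch) as the batch counter grows, which is consistent with icefall's Eden schedule: lr = base_lr * ((batch^2 + lr_batches^2) / lr_batches^2)^-0.25 * ((epoch^2 + lr_epochs^2) / lr_epochs^2)^-0.25. Assuming base_lr=0.004, lr_batches=100000 and lr_epochs=100 for this run, the formula reproduces the logged value:

    def eden_lr(base_lr, batch, epoch, lr_batches=100_000.0, lr_epochs=100.0):
        """Eden learning-rate schedule (icefall optim.py); warmup variants
        may multiply in an extra factor."""
        batch_factor = ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
        epoch_factor = ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25
        return base_lr * batch_factor * epoch_factor

    print(eden_lr(0.004, batch=154_877, epoch=28))  # ~2.89e-03, as logged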
], batch size: 33, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:32:29,169 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1070, 2.1200, 1.6306, 2.2564, 2.0598, 1.7643, 2.4086, 2.1255], device='cuda:6'), covar=tensor([0.1277, 0.2065, 0.2853, 0.2290, 0.2511, 0.1655, 0.2969, 0.1617], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0254, 0.0250, 0.0207, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:32:31,990 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.1554, 3.5598, 3.8398, 3.9929, 3.9292, 3.6792, 4.2488, 1.3825], device='cuda:6'), covar=tensor([0.0906, 0.0933, 0.0825, 0.1099, 0.1255, 0.1516, 0.0737, 0.6012], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0248, 0.0285, 0.0297, 0.0336, 0.0288, 0.0306, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:32:43,065 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155037.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:32:45,384 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155041.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 09:32:49,883 INFO [finetune.py:976] (6/7) Epoch 28, batch 400, loss[loss=0.1621, simple_loss=0.2217, pruned_loss=0.05127, over 4782.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2456, pruned_loss=0.04935, over 830485.67 frames. ], batch size: 26, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:33:13,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155069.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:33:15,063 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.906e+01 1.559e+02 1.879e+02 2.352e+02 4.263e+02, threshold=3.758e+02, percent-clipped=3.0 2023-03-27 09:33:18,283 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155076.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:33:19,636 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.63 vs. limit=5.0 2023-03-27 09:33:31,494 INFO [finetune.py:976] (6/7) Epoch 28, batch 450, loss[loss=0.1421, simple_loss=0.2215, pruned_loss=0.03132, over 4697.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2441, pruned_loss=0.04884, over 857595.04 frames. ], batch size: 23, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:33:54,284 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:33:54,328 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:34:03,774 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.75 vs. limit=5.0 2023-03-27 09:34:05,151 INFO [finetune.py:976] (6/7) Epoch 28, batch 500, loss[loss=0.1699, simple_loss=0.2303, pruned_loss=0.05475, over 4833.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2412, pruned_loss=0.04824, over 880110.60 frames. 
], batch size: 33, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:34:28,556 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.687e+01 1.475e+02 1.683e+02 2.204e+02 4.497e+02, threshold=3.366e+02, percent-clipped=1.0 2023-03-27 09:34:37,154 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155178.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:34:49,247 INFO [finetune.py:976] (6/7) Epoch 28, batch 550, loss[loss=0.1457, simple_loss=0.2212, pruned_loss=0.03516, over 4821.00 frames. ], tot_loss[loss=0.1648, simple_loss=0.237, pruned_loss=0.0463, over 895834.25 frames. ], batch size: 41, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:35:08,202 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2758, 1.3998, 1.2047, 1.3830, 1.6408, 1.5721, 1.4476, 1.3530], device='cuda:6'), covar=tensor([0.0483, 0.0333, 0.0658, 0.0339, 0.0245, 0.0468, 0.0405, 0.0435], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0106, 0.0148, 0.0112, 0.0102, 0.0116, 0.0103, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.8341e-05, 8.1468e-05, 1.1540e-04, 8.5081e-05, 7.8825e-05, 8.5177e-05, 7.6698e-05, 8.6182e-05], device='cuda:6') 2023-03-27 09:35:23,082 INFO [finetune.py:976] (6/7) Epoch 28, batch 600, loss[loss=0.2425, simple_loss=0.2941, pruned_loss=0.09544, over 4125.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2384, pruned_loss=0.04755, over 907610.97 frames. ], batch size: 65, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:35:32,585 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6847, 1.6988, 1.5798, 1.0320, 1.8222, 1.9810, 1.9239, 1.4954], device='cuda:6'), covar=tensor([0.1046, 0.0717, 0.0566, 0.0597, 0.0521, 0.0734, 0.0405, 0.0822], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0147, 0.0130, 0.0122, 0.0132, 0.0130, 0.0143, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8888e-05, 1.0558e-04, 9.2272e-05, 8.5837e-05, 9.2188e-05, 9.2380e-05, 1.0138e-04, 1.0739e-04], device='cuda:6') 2023-03-27 09:35:38,978 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.640e+01 1.454e+02 1.702e+02 1.998e+02 4.828e+02, threshold=3.403e+02, percent-clipped=2.0 2023-03-27 09:35:56,678 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155284.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:36:05,097 INFO [finetune.py:976] (6/7) Epoch 28, batch 650, loss[loss=0.1638, simple_loss=0.2482, pruned_loss=0.03971, over 4913.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2407, pruned_loss=0.04805, over 919043.41 frames. 
], batch size: 43, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:36:07,729 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3135, 2.3625, 1.7472, 2.5849, 2.2649, 1.9256, 2.7954, 2.4445], device='cuda:6'), covar=tensor([0.1204, 0.2145, 0.2810, 0.2390, 0.2425, 0.1573, 0.2877, 0.1571], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0254, 0.0250, 0.0206, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:36:29,095 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155332.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:36:29,105 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155332.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:36:31,597 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3039, 2.1808, 1.8250, 2.2094, 2.2154, 1.9807, 2.5117, 2.3468], device='cuda:6'), covar=tensor([0.1348, 0.2142, 0.2924, 0.2533, 0.2566, 0.1685, 0.3056, 0.1701], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0254, 0.0250, 0.0207, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:36:38,729 INFO [finetune.py:976] (6/7) Epoch 28, batch 700, loss[loss=0.1835, simple_loss=0.2666, pruned_loss=0.05021, over 4806.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2416, pruned_loss=0.04849, over 926307.82 frames. ], batch size: 41, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:36:54,659 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.124e+02 1.563e+02 1.812e+02 2.261e+02 4.160e+02, threshold=3.625e+02, percent-clipped=3.0 2023-03-27 09:36:57,795 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155376.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:37:05,566 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.2089, 4.5462, 4.7893, 5.0406, 4.9558, 4.6783, 5.3540, 1.5296], device='cuda:6'), covar=tensor([0.0759, 0.0881, 0.0751, 0.0999, 0.1160, 0.1689, 0.0532, 0.6159], device='cuda:6'), in_proj_covar=tensor([0.0352, 0.0247, 0.0284, 0.0297, 0.0335, 0.0287, 0.0305, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:37:19,494 INFO [finetune.py:976] (6/7) Epoch 28, batch 750, loss[loss=0.1449, simple_loss=0.2147, pruned_loss=0.03749, over 4684.00 frames. ], tot_loss[loss=0.1705, simple_loss=0.2433, pruned_loss=0.04887, over 934226.78 frames. 
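The attn_weights_entropy tensors that zipformer.py:2441 dumps hold eight diagnostic values per record, consistent with one mean entropy per attention head: a value near zero means that head attends to a single frame, larger values mean diffuse attention. A sketch of such a diagnostic, with the tensor layout an assumption:

    import torch

    def attn_entropy_per_head(attn_weights: torch.Tensor) -> torch.Tensor:
        """Mean entropy in nats of the attention distributions, one per head.

        attn_weights: (num_heads, batch, query_len, key_len), with each row a
        distribution over keys; the axis layout is assumed for illustration.
        """
        eps = 1e-20
        ent = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return ent.mean(dim=(1, 2))  # -> (num_heads,)

The covar/in_proj_covar/out_proj_covar tensors printed alongside appear to be matching covariance diagnostics for the same modules.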
], batch size: 23, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:37:40,161 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155424.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:37:41,308 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155425.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:37:50,143 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7461, 1.5807, 1.5646, 1.6453, 1.3275, 3.5816, 1.3924, 1.7404], device='cuda:6'), covar=tensor([0.3289, 0.2529, 0.2209, 0.2492, 0.1681, 0.0223, 0.2647, 0.1291], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 09:37:56,712 INFO [finetune.py:976] (6/7) Epoch 28, batch 800, loss[loss=0.1703, simple_loss=0.2453, pruned_loss=0.04768, over 4715.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2419, pruned_loss=0.04778, over 938629.53 frames. ], batch size: 59, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:38:02,351 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155457.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:38:11,704 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 09:38:11,935 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.111e+02 1.495e+02 1.724e+02 1.967e+02 3.002e+02, threshold=3.447e+02, percent-clipped=0.0 2023-03-27 09:38:39,880 INFO [finetune.py:976] (6/7) Epoch 28, batch 850, loss[loss=0.1382, simple_loss=0.2095, pruned_loss=0.03339, over 4864.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2409, pruned_loss=0.04779, over 942397.56 frames. 
], batch size: 31, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:38:52,663 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155518.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:39:03,290 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2292, 1.9642, 2.2001, 2.1180, 1.8926, 1.9028, 2.1070, 2.1301], device='cuda:6'), covar=tensor([0.3861, 0.3576, 0.2919, 0.4240, 0.5065, 0.4483, 0.4483, 0.2724], device='cuda:6'), in_proj_covar=tensor([0.0266, 0.0248, 0.0268, 0.0297, 0.0296, 0.0273, 0.0302, 0.0252], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:39:07,985 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8961, 1.8047, 1.5874, 1.4784, 1.9275, 1.6713, 1.8194, 1.9015], device='cuda:6'), covar=tensor([0.1402, 0.1938, 0.2979, 0.2523, 0.2624, 0.1721, 0.2913, 0.1771], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0190, 0.0236, 0.0255, 0.0249, 0.0207, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:39:09,200 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1606, 2.1322, 1.7785, 2.1330, 2.1809, 1.9490, 2.3848, 2.2111], device='cuda:6'), covar=tensor([0.1178, 0.1839, 0.2600, 0.2196, 0.2223, 0.1501, 0.2776, 0.1484], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0190, 0.0236, 0.0255, 0.0249, 0.0207, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:39:13,738 INFO [finetune.py:976] (6/7) Epoch 28, batch 900, loss[loss=0.1365, simple_loss=0.2062, pruned_loss=0.03344, over 4908.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2378, pruned_loss=0.04648, over 946719.44 frames. ], batch size: 36, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:39:28,252 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.783e+01 1.413e+02 1.787e+02 2.288e+02 4.282e+02, threshold=3.575e+02, percent-clipped=3.0 2023-03-27 09:39:54,065 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.77 vs. limit=2.0 2023-03-27 09:39:54,475 INFO [finetune.py:976] (6/7) Epoch 28, batch 950, loss[loss=0.1486, simple_loss=0.2, pruned_loss=0.0486, over 4059.00 frames. ], tot_loss[loss=0.1639, simple_loss=0.2362, pruned_loss=0.04584, over 949500.12 frames. ], batch size: 17, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:40:02,632 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155605.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:20,529 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155632.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:31,714 INFO [finetune.py:976] (6/7) Epoch 28, batch 1000, loss[loss=0.1669, simple_loss=0.2315, pruned_loss=0.05113, over 4766.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2403, pruned_loss=0.04772, over 949918.84 frames. 
], batch size: 27, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:40:37,313 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155657.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:42,777 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155666.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:40:45,715 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.619e+01 1.519e+02 1.814e+02 2.190e+02 3.109e+02, threshold=3.628e+02, percent-clipped=0.0 2023-03-27 09:40:49,111 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 09:40:54,192 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155680.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:41:13,974 INFO [finetune.py:976] (6/7) Epoch 28, batch 1050, loss[loss=0.1846, simple_loss=0.2578, pruned_loss=0.05573, over 4761.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2425, pruned_loss=0.04789, over 951668.26 frames. ], batch size: 28, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:41:26,784 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155718.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:41:29,277 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-27 09:41:31,012 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=155725.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:41:35,894 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.64 vs. limit=2.0 2023-03-27 09:41:45,046 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4828, 2.4510, 2.2093, 2.6410, 2.9438, 2.6136, 2.5599, 2.0404], device='cuda:6'), covar=tensor([0.1982, 0.1697, 0.1710, 0.1428, 0.1527, 0.0992, 0.1753, 0.1755], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0213, 0.0217, 0.0201, 0.0248, 0.0192, 0.0218, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:41:46,714 INFO [finetune.py:976] (6/7) Epoch 28, batch 1100, loss[loss=0.1125, simple_loss=0.1934, pruned_loss=0.01579, over 4773.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2441, pruned_loss=0.04835, over 953249.55 frames. ], batch size: 26, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:41:49,473 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.92 vs. 
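tot_loss[...] is not the loss of a single batch: it is reported "over ~950000 frames", a frame-weighted aggregate of recent batches in which old batches decay away, rather than an average over the whole epoch. A sketch of that bookkeeping; the reset_interval=200 decay constant is an assumption (with roughly 4800 frames per batch it would keep the effective window near the ~950k frames seen in the records):

    class RunningLoss:
        """Frame-weighted running aggregate like tot_loss[...] in this log."""

        def __init__(self, reset_interval: int = 200):  # assumed setting
            self.decay = 1.0 - 1.0 / reset_interval
            self.loss_sum = 0.0
            self.frames = 0.0

        def update(self, batch_loss: float, batch_frames: float) -> None:
            # Decay old sums, then add the new batch, so the effective window
            # hovers near reset_interval * avg_frames_per_batch.
            self.loss_sum = self.loss_sum * self.decay + batch_loss * batch_frames
            self.frames = self.frames * self.decay + batch_frames

        @property
        def value(self) -> float:
            return self.loss_sum / self.frames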
limit=5.0 2023-03-27 09:41:56,302 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4289, 2.3083, 2.1401, 1.0265, 2.2535, 1.8557, 1.7287, 2.1606], device='cuda:6'), covar=tensor([0.1120, 0.0869, 0.1551, 0.2019, 0.1529, 0.2437, 0.2328, 0.1132], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0192, 0.0202, 0.0182, 0.0210, 0.0212, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:42:01,622 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.104e+02 1.629e+02 1.915e+02 2.256e+02 9.973e+02, threshold=3.830e+02, percent-clipped=2.0 2023-03-27 09:42:02,899 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=155773.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:42:10,613 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=155784.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:42:12,865 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.90 vs. limit=2.0 2023-03-27 09:42:21,325 INFO [finetune.py:976] (6/7) Epoch 28, batch 1150, loss[loss=0.1672, simple_loss=0.241, pruned_loss=0.04674, over 4755.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2464, pruned_loss=0.04938, over 953032.86 frames. ], batch size: 28, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:42:23,720 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.13 vs. limit=2.0 2023-03-27 09:42:39,762 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155813.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:42:55,943 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6401, 2.5026, 2.1270, 1.0071, 2.2224, 2.0778, 1.9329, 2.2645], device='cuda:6'), covar=tensor([0.0872, 0.0790, 0.1451, 0.1991, 0.1293, 0.2133, 0.2061, 0.0896], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0191, 0.0201, 0.0181, 0.0210, 0.0211, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:43:00,220 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 09:43:01,187 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=155845.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:43:02,854 INFO [finetune.py:976] (6/7) Epoch 28, batch 1200, loss[loss=0.154, simple_loss=0.2242, pruned_loss=0.04188, over 4753.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2442, pruned_loss=0.04869, over 952826.76 frames. ], batch size: 28, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:43:18,195 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.526e+02 1.746e+02 2.167e+02 3.236e+02, threshold=3.492e+02, percent-clipped=0.0 2023-03-27 09:43:26,158 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1404, 1.3095, 1.3899, 0.6120, 1.3372, 1.5693, 1.6241, 1.2885], device='cuda:6'), covar=tensor([0.0926, 0.0701, 0.0547, 0.0552, 0.0520, 0.0633, 0.0344, 0.0667], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0147, 0.0130, 0.0123, 0.0132, 0.0131, 0.0143, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8845e-05, 1.0563e-04, 9.2553e-05, 8.6181e-05, 9.2435e-05, 9.2454e-05, 1.0168e-04, 1.0740e-04], device='cuda:6') 2023-03-27 09:43:45,613 INFO [finetune.py:976] (6/7) Epoch 28, batch 1250, loss[loss=0.1738, simple_loss=0.2401, pruned_loss=0.05371, over 4934.00 frames. 
], tot_loss[loss=0.1699, simple_loss=0.2421, pruned_loss=0.04879, over 953377.65 frames. ], batch size: 38, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:43:58,882 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4124, 3.7915, 4.0492, 4.2677, 4.1762, 3.9044, 4.4884, 1.5338], device='cuda:6'), covar=tensor([0.0840, 0.1057, 0.0955, 0.1156, 0.1246, 0.1721, 0.0650, 0.5842], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0247, 0.0285, 0.0297, 0.0335, 0.0289, 0.0306, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:44:22,108 INFO [finetune.py:976] (6/7) Epoch 28, batch 1300, loss[loss=0.2245, simple_loss=0.2872, pruned_loss=0.08093, over 4936.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2402, pruned_loss=0.0482, over 953664.98 frames. ], batch size: 38, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:44:32,019 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=155961.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:44:38,007 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.818e+01 1.476e+02 1.729e+02 2.197e+02 4.050e+02, threshold=3.458e+02, percent-clipped=1.0 2023-03-27 09:44:53,638 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7593, 1.6399, 1.3986, 1.3263, 1.5454, 1.5431, 1.5361, 2.1288], device='cuda:6'), covar=tensor([0.3594, 0.3500, 0.2965, 0.3311, 0.3420, 0.2357, 0.3322, 0.1626], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0265, 0.0238, 0.0276, 0.0261, 0.0231, 0.0260, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:44:55,320 INFO [finetune.py:976] (6/7) Epoch 28, batch 1350, loss[loss=0.1505, simple_loss=0.2171, pruned_loss=0.04199, over 4795.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2405, pruned_loss=0.04876, over 952468.44 frames. 
], batch size: 26, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:44:56,001 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9346, 4.6231, 4.3417, 2.5051, 4.7664, 3.4716, 0.8717, 3.2749], device='cuda:6'), covar=tensor([0.2450, 0.1814, 0.1459, 0.3026, 0.0847, 0.0889, 0.4589, 0.1322], device='cuda:6'), in_proj_covar=tensor([0.0149, 0.0179, 0.0160, 0.0130, 0.0163, 0.0123, 0.0148, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 09:45:10,151 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156013.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:45:30,431 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5309, 1.3862, 1.8700, 2.9127, 1.9525, 2.1347, 1.0216, 2.5078], device='cuda:6'), covar=tensor([0.1687, 0.1453, 0.1249, 0.0573, 0.0860, 0.1420, 0.1772, 0.0463], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0167, 0.0101, 0.0137, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:45:31,631 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.3793, 3.8253, 4.0356, 4.2523, 4.1676, 3.9147, 4.4929, 1.3512], device='cuda:6'), covar=tensor([0.0869, 0.0890, 0.0893, 0.0986, 0.1209, 0.1666, 0.0596, 0.6402], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0284, 0.0296, 0.0334, 0.0287, 0.0304, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:45:32,752 INFO [finetune.py:976] (6/7) Epoch 28, batch 1400, loss[loss=0.1836, simple_loss=0.2627, pruned_loss=0.05231, over 4825.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2419, pruned_loss=0.04836, over 952631.66 frames. ], batch size: 39, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:45:48,238 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156070.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:45:48,710 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.532e+02 1.806e+02 2.221e+02 4.474e+02, threshold=3.612e+02, percent-clipped=3.0 2023-03-27 09:46:06,150 INFO [finetune.py:976] (6/7) Epoch 28, batch 1450, loss[loss=0.1614, simple_loss=0.2361, pruned_loss=0.04333, over 4928.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.2432, pruned_loss=0.04862, over 952566.40 frames. 
], batch size: 38, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:46:14,584 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8700, 1.8041, 1.5251, 1.9343, 2.3229, 2.0089, 1.6222, 1.5096], device='cuda:6'), covar=tensor([0.1995, 0.1779, 0.1848, 0.1645, 0.1623, 0.1105, 0.2407, 0.1834], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0200, 0.0246, 0.0190, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:46:23,161 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156113.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:46:35,155 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156131.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:46:40,536 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156140.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 09:46:45,775 INFO [finetune.py:976] (6/7) Epoch 28, batch 1500, loss[loss=0.2012, simple_loss=0.2849, pruned_loss=0.05875, over 4722.00 frames. ], tot_loss[loss=0.1722, simple_loss=0.2453, pruned_loss=0.04955, over 951629.40 frames. ], batch size: 54, lr: 2.89e-03, grad_scale: 64.0 2023-03-27 09:46:47,660 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.66 vs. limit=2.0 2023-03-27 09:46:54,182 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156161.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:47:02,075 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.585e+02 1.899e+02 2.331e+02 3.577e+02, threshold=3.799e+02, percent-clipped=0.0 2023-03-27 09:47:18,932 INFO [finetune.py:976] (6/7) Epoch 28, batch 1550, loss[loss=0.1949, simple_loss=0.2646, pruned_loss=0.06262, over 4815.00 frames. ], tot_loss[loss=0.1709, simple_loss=0.2441, pruned_loss=0.0488, over 953321.65 frames. ], batch size: 33, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:47:21,361 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 09:47:30,565 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156216.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:47:59,292 INFO [finetune.py:976] (6/7) Epoch 28, batch 1600, loss[loss=0.2315, simple_loss=0.2821, pruned_loss=0.09042, over 4838.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2422, pruned_loss=0.04852, over 951177.79 frames. ], batch size: 49, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:48:08,686 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156261.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:15,839 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.043e+02 1.435e+02 1.767e+02 2.111e+02 3.704e+02, threshold=3.535e+02, percent-clipped=0.0 2023-03-27 09:48:19,962 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:27,879 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156290.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:32,613 INFO [finetune.py:976] (6/7) Epoch 28, batch 1650, loss[loss=0.1425, simple_loss=0.2268, pruned_loss=0.02916, over 4827.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2395, pruned_loss=0.04755, over 953455.12 frames. 
], batch size: 41, lr: 2.89e-03, grad_scale: 32.0 2023-03-27 09:48:38,731 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5375, 1.3882, 1.2808, 1.5278, 1.7275, 1.5304, 1.2291, 1.3280], device='cuda:6'), covar=tensor([0.1704, 0.1708, 0.1550, 0.1361, 0.1412, 0.1038, 0.2410, 0.1499], device='cuda:6'), in_proj_covar=tensor([0.0245, 0.0212, 0.0215, 0.0199, 0.0245, 0.0190, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:48:40,817 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156309.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:48:43,231 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156313.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:49:18,366 INFO [finetune.py:976] (6/7) Epoch 28, batch 1700, loss[loss=0.1876, simple_loss=0.2572, pruned_loss=0.059, over 4916.00 frames. ], tot_loss[loss=0.1656, simple_loss=0.2373, pruned_loss=0.047, over 952040.65 frames. ], batch size: 37, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:49:20,334 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156351.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:49:20,403 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.77 vs. limit=5.0 2023-03-27 09:49:27,453 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156361.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:49:34,437 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.260e+01 1.525e+02 1.796e+02 2.204e+02 4.546e+02, threshold=3.593e+02, percent-clipped=3.0 2023-03-27 09:49:51,285 INFO [finetune.py:976] (6/7) Epoch 28, batch 1750, loss[loss=0.1666, simple_loss=0.2488, pruned_loss=0.0422, over 4759.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2396, pruned_loss=0.04774, over 952655.02 frames. ], batch size: 59, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:49:59,751 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.71 vs. limit=5.0 2023-03-27 09:50:08,431 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.58 vs. limit=2.0 2023-03-27 09:50:09,512 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156426.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 09:50:19,440 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156440.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:50:19,733 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-27 09:50:24,162 INFO [finetune.py:976] (6/7) Epoch 28, batch 1800, loss[loss=0.1734, simple_loss=0.2555, pruned_loss=0.04566, over 4932.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2442, pruned_loss=0.04929, over 953221.38 frames. ], batch size: 38, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:50:39,991 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.522e+02 1.851e+02 2.291e+02 4.651e+02, threshold=3.702e+02, percent-clipped=5.0 2023-03-27 09:50:51,504 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156488.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:50:57,545 INFO [finetune.py:976] (6/7) Epoch 28, batch 1850, loss[loss=0.1905, simple_loss=0.2562, pruned_loss=0.06235, over 4882.00 frames. 
], tot_loss[loss=0.1728, simple_loss=0.246, pruned_loss=0.04979, over 953175.42 frames. ], batch size: 35, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:51:40,565 INFO [finetune.py:976] (6/7) Epoch 28, batch 1900, loss[loss=0.1322, simple_loss=0.2102, pruned_loss=0.02713, over 4774.00 frames. ], tot_loss[loss=0.1723, simple_loss=0.2459, pruned_loss=0.04938, over 954682.22 frames. ], batch size: 28, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:51:56,034 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.831e+01 1.540e+02 1.871e+02 2.242e+02 4.934e+02, threshold=3.741e+02, percent-clipped=1.0 2023-03-27 09:51:56,112 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156572.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:52:06,803 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 09:52:13,684 INFO [finetune.py:976] (6/7) Epoch 28, batch 1950, loss[loss=0.2092, simple_loss=0.2724, pruned_loss=0.07304, over 4819.00 frames. ], tot_loss[loss=0.1729, simple_loss=0.2463, pruned_loss=0.0498, over 954150.99 frames. ], batch size: 39, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:52:46,375 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=156646.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:52:47,508 INFO [finetune.py:976] (6/7) Epoch 28, batch 2000, loss[loss=0.1535, simple_loss=0.2084, pruned_loss=0.04928, over 4762.00 frames. ], tot_loss[loss=0.1712, simple_loss=0.2436, pruned_loss=0.04938, over 955472.96 frames. ], batch size: 27, lr: 2.88e-03, grad_scale: 32.0 2023-03-27 09:53:04,322 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.919e+01 1.529e+02 1.760e+02 2.169e+02 4.761e+02, threshold=3.520e+02, percent-clipped=1.0 2023-03-27 09:53:29,982 INFO [finetune.py:976] (6/7) Epoch 28, batch 2050, loss[loss=0.172, simple_loss=0.2413, pruned_loss=0.05137, over 4914.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2396, pruned_loss=0.04818, over 955171.18 frames. ], batch size: 43, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:53:30,072 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3595, 1.3745, 1.6967, 2.4617, 1.6240, 2.1802, 1.0214, 2.1927], device='cuda:6'), covar=tensor([0.1794, 0.1382, 0.1162, 0.0796, 0.0973, 0.1257, 0.1528, 0.0559], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0167, 0.0101, 0.0137, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 09:53:34,161 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9246, 4.1933, 4.1115, 2.2384, 4.4330, 3.3208, 0.7782, 3.0170], device='cuda:6'), covar=tensor([0.2696, 0.2120, 0.1387, 0.3252, 0.0798, 0.0844, 0.4802, 0.1488], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0179, 0.0159, 0.0129, 0.0163, 0.0123, 0.0148, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 09:53:47,934 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156726.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 09:54:08,957 INFO [finetune.py:976] (6/7) Epoch 28, batch 2100, loss[loss=0.1735, simple_loss=0.2604, pruned_loss=0.04329, over 4860.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2389, pruned_loss=0.04842, over 953434.42 frames. 
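grad_scale in these records is the fp16 loss-scaling factor, and it moves the way dynamic loss scaling does: it doubled after a long overflow-free run (32.0 to 64.0 back at epoch 28, batch 250) and halves whenever an overflow forces a skipped step (back to 32.0 by batch 1550, then 16.0 at batch 2050 above). The same behaviour is available from PyTorch's own scaler; the growth interval used by this run is an assumption:

    import torch

    scaler = torch.cuda.amp.GradScaler(
        init_scale=32.0,       # matches the first grad_scale seen in this log
        growth_factor=2.0,     # 32 -> 64 after enough clean steps
        backoff_factor=0.5,    # 64 -> 32 -> 16 on overflowing steps
        growth_interval=2000,  # assumption; icefall may manage this itself
    )

    def training_step(model, optimizer, batch, compute_loss):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():
            loss = compute_loss(model, batch)
        scaler.scale(loss).backward()
        scaler.step(optimizer)  # skipped internally if gradients overflowed
        scaler.update()         # grows or backs off the scale
        return loss.detach()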
], batch size: 47, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:54:10,264 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156749.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:54:37,355 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.59 vs. limit=2.0 2023-03-27 09:54:37,771 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.070e+02 1.521e+02 1.844e+02 2.179e+02 3.224e+02, threshold=3.687e+02, percent-clipped=0.0 2023-03-27 09:54:38,475 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156774.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:54:54,961 INFO [finetune.py:976] (6/7) Epoch 28, batch 2150, loss[loss=0.2127, simple_loss=0.2719, pruned_loss=0.07671, over 4277.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.2432, pruned_loss=0.04969, over 952179.30 frames. ], batch size: 65, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:55:03,481 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156810.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:55:13,220 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.53 vs. limit=2.0 2023-03-27 09:55:23,011 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.88 vs. limit=5.0 2023-03-27 09:55:27,791 INFO [finetune.py:976] (6/7) Epoch 28, batch 2200, loss[loss=0.1979, simple_loss=0.2659, pruned_loss=0.06497, over 4869.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2443, pruned_loss=0.04991, over 951533.41 frames. ], batch size: 31, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:55:44,139 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:55:44,655 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.040e+02 1.543e+02 1.740e+02 2.127e+02 4.555e+02, threshold=3.480e+02, percent-clipped=1.0 2023-03-27 09:55:52,578 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=156885.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:01,290 INFO [finetune.py:976] (6/7) Epoch 28, batch 2250, loss[loss=0.147, simple_loss=0.2223, pruned_loss=0.03585, over 4790.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2455, pruned_loss=0.05032, over 950674.17 frames. ], batch size: 25, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:56:15,944 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 09:56:16,219 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156920.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:17,863 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.66 vs. limit=5.0 2023-03-27 09:56:32,864 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=156946.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:32,889 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=156946.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:56:33,998 INFO [finetune.py:976] (6/7) Epoch 28, batch 2300, loss[loss=0.1531, simple_loss=0.2288, pruned_loss=0.03872, over 4771.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2445, pruned_loss=0.04915, over 951938.77 frames. 
], batch size: 28, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:57:00,121 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.437e+02 1.655e+02 2.039e+02 3.893e+02, threshold=3.311e+02, percent-clipped=1.0 2023-03-27 09:57:17,549 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=156994.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:57:20,393 INFO [finetune.py:976] (6/7) Epoch 28, batch 2350, loss[loss=0.1317, simple_loss=0.219, pruned_loss=0.02222, over 4819.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2434, pruned_loss=0.04874, over 955156.63 frames. ], batch size: 40, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:57:27,636 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3773, 2.9446, 2.7791, 1.2859, 3.0062, 2.1942, 0.8441, 1.9157], device='cuda:6'), covar=tensor([0.2449, 0.2191, 0.1978, 0.3465, 0.1502, 0.1131, 0.3966, 0.1813], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0179, 0.0160, 0.0129, 0.0163, 0.0123, 0.0148, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 09:57:28,244 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157010.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:57:44,754 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1647, 1.8011, 2.3985, 1.5604, 2.1265, 2.3319, 1.6833, 2.5373], device='cuda:6'), covar=tensor([0.1246, 0.1878, 0.1372, 0.1905, 0.0878, 0.1222, 0.2920, 0.0783], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0194, 0.0190, 0.0175, 0.0214, 0.0220, 0.0200], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:57:52,969 INFO [finetune.py:976] (6/7) Epoch 28, batch 2400, loss[loss=0.1904, simple_loss=0.244, pruned_loss=0.06834, over 4083.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2409, pruned_loss=0.04827, over 954654.30 frames. ], batch size: 18, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:58:08,942 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157071.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:58:12,823 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.019e+02 1.490e+02 1.799e+02 2.218e+02 3.254e+02, threshold=3.597e+02, percent-clipped=0.0 2023-03-27 09:58:26,393 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7861, 1.7238, 1.5327, 1.9010, 2.1163, 1.9154, 1.4917, 1.5223], device='cuda:6'), covar=tensor([0.2164, 0.1906, 0.1914, 0.1657, 0.1680, 0.1133, 0.2242, 0.1833], device='cuda:6'), in_proj_covar=tensor([0.0244, 0.0210, 0.0214, 0.0198, 0.0244, 0.0190, 0.0215, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 09:58:28,593 INFO [finetune.py:976] (6/7) Epoch 28, batch 2450, loss[loss=0.1704, simple_loss=0.2478, pruned_loss=0.04647, over 4820.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2379, pruned_loss=0.0469, over 955026.15 frames. 
], batch size: 40, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:58:33,535 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:58:57,736 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157142.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 09:59:01,794 INFO [finetune.py:976] (6/7) Epoch 28, batch 2500, loss[loss=0.2167, simple_loss=0.2969, pruned_loss=0.06826, over 4858.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2399, pruned_loss=0.04749, over 956634.13 frames. ], batch size: 44, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:59:02,783 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.59 vs. limit=5.0 2023-03-27 09:59:23,961 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.586e+02 1.871e+02 2.257e+02 5.817e+02, threshold=3.742e+02, percent-clipped=3.0 2023-03-27 09:59:51,557 INFO [finetune.py:976] (6/7) Epoch 28, batch 2550, loss[loss=0.1631, simple_loss=0.2441, pruned_loss=0.04108, over 4815.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2431, pruned_loss=0.04816, over 954459.28 frames. ], batch size: 41, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 09:59:55,380 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157203.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:00:06,844 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.87 vs. limit=2.0 2023-03-27 10:00:20,723 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157241.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:00:24,905 INFO [finetune.py:976] (6/7) Epoch 28, batch 2600, loss[loss=0.1743, simple_loss=0.2468, pruned_loss=0.05086, over 4906.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2439, pruned_loss=0.04866, over 952550.23 frames. ], batch size: 37, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:00:41,404 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4357, 2.0112, 2.6513, 1.6385, 2.3037, 2.5401, 1.8524, 2.7017], device='cuda:6'), covar=tensor([0.1206, 0.2184, 0.1591, 0.2134, 0.0928, 0.1508, 0.2995, 0.0787], device='cuda:6'), in_proj_covar=tensor([0.0193, 0.0208, 0.0195, 0.0190, 0.0176, 0.0214, 0.0221, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:00:41,899 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.034e+02 1.561e+02 1.846e+02 2.212e+02 4.271e+02, threshold=3.692e+02, percent-clipped=1.0 2023-03-27 10:00:53,066 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3930, 2.3976, 2.0542, 2.4043, 2.9456, 2.5237, 2.4325, 1.9019], device='cuda:6'), covar=tensor([0.2025, 0.1650, 0.1768, 0.1548, 0.1391, 0.0988, 0.1759, 0.1780], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0216, 0.0200, 0.0246, 0.0191, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:00:57,762 INFO [finetune.py:976] (6/7) Epoch 28, batch 2650, loss[loss=0.1377, simple_loss=0.2183, pruned_loss=0.02854, over 4749.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2449, pruned_loss=0.04938, over 952421.32 frames. 
], batch size: 27, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:01:08,878 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6566, 2.3249, 2.0424, 1.0163, 2.1407, 2.0100, 1.8685, 2.1485], device='cuda:6'), covar=tensor([0.0982, 0.0844, 0.1567, 0.1987, 0.1462, 0.2427, 0.2265, 0.0888], device='cuda:6'), in_proj_covar=tensor([0.0174, 0.0193, 0.0204, 0.0183, 0.0213, 0.0213, 0.0227, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:01:30,662 INFO [finetune.py:976] (6/7) Epoch 28, batch 2700, loss[loss=0.1427, simple_loss=0.2142, pruned_loss=0.03562, over 4905.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2431, pruned_loss=0.04824, over 954914.62 frames. ], batch size: 43, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:01:35,646 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157356.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:01:42,597 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157366.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:01:47,170 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.455e+02 1.749e+02 2.146e+02 4.370e+02, threshold=3.498e+02, percent-clipped=1.0 2023-03-27 10:01:49,990 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 10:01:53,262 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157382.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:12,808 INFO [finetune.py:976] (6/7) Epoch 28, batch 2750, loss[loss=0.1619, simple_loss=0.2397, pruned_loss=0.04209, over 4764.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2401, pruned_loss=0.04759, over 954085.76 frames. ], batch size: 26, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:02:20,337 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157404.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:02:20,931 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157405.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:26,300 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.63 vs. limit=2.0 2023-03-27 10:02:26,829 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2099, 2.1206, 1.7344, 2.0896, 2.1443, 1.8837, 2.3944, 2.2050], device='cuda:6'), covar=tensor([0.1232, 0.1938, 0.2884, 0.2418, 0.2344, 0.1626, 0.3043, 0.1550], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0191, 0.0238, 0.0255, 0.0251, 0.0208, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:02:29,165 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157417.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:46,865 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157443.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:02:50,283 INFO [finetune.py:976] (6/7) Epoch 28, batch 2800, loss[loss=0.1571, simple_loss=0.2266, pruned_loss=0.04385, over 4765.00 frames. ], tot_loss[loss=0.1645, simple_loss=0.2365, pruned_loss=0.04623, over 953344.91 frames. 
], batch size: 26, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:02:53,300 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157453.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:03:01,066 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157465.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 10:03:04,133 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6134, 1.5948, 1.5221, 0.7573, 1.7874, 1.9534, 1.8055, 1.4542], device='cuda:6'), covar=tensor([0.1089, 0.0704, 0.0663, 0.0701, 0.0570, 0.0587, 0.0392, 0.0779], device='cuda:6'), in_proj_covar=tensor([0.0122, 0.0148, 0.0130, 0.0123, 0.0132, 0.0130, 0.0142, 0.0151], device='cuda:6'), out_proj_covar=tensor([8.8661e-05, 1.0617e-04, 9.2133e-05, 8.6118e-05, 9.2448e-05, 9.2340e-05, 1.0112e-04, 1.0793e-04], device='cuda:6') 2023-03-27 10:03:06,316 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.288e+01 1.520e+02 1.688e+02 2.071e+02 7.416e+02, threshold=3.376e+02, percent-clipped=4.0 2023-03-27 10:03:23,413 INFO [finetune.py:976] (6/7) Epoch 28, batch 2850, loss[loss=0.1669, simple_loss=0.229, pruned_loss=0.05243, over 4927.00 frames. ], tot_loss[loss=0.1633, simple_loss=0.2352, pruned_loss=0.04563, over 955127.51 frames. ], batch size: 33, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:03:23,481 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157498.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:03:28,870 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157506.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:03:44,268 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0401, 1.8781, 1.6295, 1.6290, 1.8327, 1.7886, 1.8813, 2.5285], device='cuda:6'), covar=tensor([0.3780, 0.4192, 0.3354, 0.3683, 0.3793, 0.2459, 0.3653, 0.1754], device='cuda:6'), in_proj_covar=tensor([0.0289, 0.0266, 0.0238, 0.0276, 0.0261, 0.0231, 0.0259, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:03:52,480 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157541.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:03:57,080 INFO [finetune.py:976] (6/7) Epoch 28, batch 2900, loss[loss=0.2066, simple_loss=0.2763, pruned_loss=0.06845, over 4895.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2397, pruned_loss=0.04723, over 954661.23 frames. ], batch size: 35, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:04:03,733 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4081, 1.3117, 1.6059, 2.2893, 1.5831, 2.0936, 1.0002, 1.9936], device='cuda:6'), covar=tensor([0.1496, 0.1226, 0.1000, 0.0630, 0.0866, 0.1116, 0.1316, 0.0575], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0117, 0.0134, 0.0166, 0.0101, 0.0137, 0.0126, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 10:04:09,795 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157567.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 10:04:10,375 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157568.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:04:11,079 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.93 vs. 
limit=5.0 2023-03-27 10:04:13,240 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.581e+01 1.484e+02 1.768e+02 2.098e+02 4.175e+02, threshold=3.535e+02, percent-clipped=1.0 2023-03-27 10:04:21,023 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3973, 2.1137, 2.3958, 2.4116, 2.1194, 2.1551, 2.3884, 2.2685], device='cuda:6'), covar=tensor([0.4008, 0.4105, 0.3233, 0.3987, 0.5114, 0.4105, 0.5239, 0.2968], device='cuda:6'), in_proj_covar=tensor([0.0268, 0.0249, 0.0270, 0.0298, 0.0298, 0.0275, 0.0304, 0.0254], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:04:24,448 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157589.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:04:32,418 INFO [finetune.py:976] (6/7) Epoch 28, batch 2950, loss[loss=0.1801, simple_loss=0.2473, pruned_loss=0.05646, over 4131.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.243, pruned_loss=0.04869, over 952681.84 frames. ], batch size: 65, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:05:06,300 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.72 vs. limit=2.0 2023-03-27 10:05:06,828 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157629.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:05:13,665 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 10:05:23,715 INFO [finetune.py:976] (6/7) Epoch 28, batch 3000, loss[loss=0.2057, simple_loss=0.2694, pruned_loss=0.07098, over 4882.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2439, pruned_loss=0.04874, over 954387.50 frames. ], batch size: 32, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:05:23,716 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 10:05:30,478 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6528, 3.5647, 3.4521, 1.5481, 3.6248, 2.9584, 0.9793, 2.5586], device='cuda:6'), covar=tensor([0.1878, 0.1416, 0.1478, 0.3066, 0.1051, 0.0893, 0.3321, 0.1380], device='cuda:6'), in_proj_covar=tensor([0.0153, 0.0182, 0.0162, 0.0132, 0.0166, 0.0125, 0.0151, 0.0127], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 10:05:34,508 INFO [finetune.py:1010] (6/7) Epoch 28, validation: loss=0.1567, simple_loss=0.2243, pruned_loss=0.04455, over 2265189.00 frames. 
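A note on reading the loss[...] and tot_loss[...] entries: throughout this section the logged total is consistent with a fixed weighting of the two transducer loss terms, loss = 0.5 * simple_loss + pruned_loss. Below is a minimal arithmetic check against the Epoch 28 validation entry just above; the weight of 0.5 is inferred from the logged numbers themselves, and the variable names are illustrative rather than finetune.py's own.

```python
# Check the inferred weighting against the Epoch 28 validation entry above:
# validation: loss=0.1567, simple_loss=0.2243, pruned_loss=0.04455.
simple_loss, pruned_loss = 0.2243, 0.04455
loss = 0.5 * simple_loss + pruned_loss   # assumed simple-loss weight of 0.5
assert abs(loss - 0.1567) < 5e-5         # matches the logged validation loss
```

The same identity holds for the running averages, e.g. 0.5 * 0.2403 + 0.04786 = 0.16801 against the logged tot_loss of 0.168 at Epoch 28, batch 3150.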
2023-03-27 10:05:34,508 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 10:05:43,589 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4350, 2.2924, 1.7602, 2.3941, 2.3114, 1.9517, 2.6281, 2.4092], device='cuda:6'), covar=tensor([0.1235, 0.1973, 0.3032, 0.2335, 0.2364, 0.1700, 0.2616, 0.1715], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0237, 0.0254, 0.0250, 0.0207, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:05:46,468 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157666.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:05:50,620 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.041e+02 1.524e+02 1.841e+02 2.248e+02 4.082e+02, threshold=3.682e+02, percent-clipped=3.0 2023-03-27 10:06:07,189 INFO [finetune.py:976] (6/7) Epoch 28, batch 3050, loss[loss=0.1445, simple_loss=0.2169, pruned_loss=0.03608, over 4886.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2445, pruned_loss=0.04851, over 956134.12 frames. ], batch size: 32, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:06:13,394 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-27 10:06:16,640 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157712.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:06:17,825 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157714.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:06:30,206 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.47 vs. limit=2.0 2023-03-27 10:06:33,279 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157738.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:06:40,234 INFO [finetune.py:976] (6/7) Epoch 28, batch 3100, loss[loss=0.1483, simple_loss=0.2231, pruned_loss=0.03679, over 4760.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2425, pruned_loss=0.04807, over 955203.80 frames. ], batch size: 27, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:06:48,819 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157760.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 10:06:57,051 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.389e+02 1.744e+02 2.105e+02 3.209e+02, threshold=3.488e+02, percent-clipped=0.0 2023-03-27 10:07:00,181 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5695, 3.9709, 4.1526, 4.3688, 4.3278, 4.0561, 4.6613, 1.5585], device='cuda:6'), covar=tensor([0.0794, 0.0832, 0.0731, 0.0963, 0.1172, 0.1508, 0.0608, 0.5485], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0246, 0.0285, 0.0294, 0.0335, 0.0285, 0.0304, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:07:19,511 INFO [finetune.py:976] (6/7) Epoch 28, batch 3150, loss[loss=0.1759, simple_loss=0.2381, pruned_loss=0.05682, over 4749.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2403, pruned_loss=0.04786, over 954239.54 frames. 
], batch size: 27, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:07:20,084 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.3138, 4.6167, 4.8087, 5.1295, 5.0768, 4.7497, 5.3691, 1.7072], device='cuda:6'), covar=tensor([0.0491, 0.0813, 0.0716, 0.0623, 0.0811, 0.1433, 0.0462, 0.5437], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0284, 0.0293, 0.0334, 0.0284, 0.0304, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:07:20,105 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=157798.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:04,466 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=157846.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:05,667 INFO [finetune.py:976] (6/7) Epoch 28, batch 3200, loss[loss=0.1333, simple_loss=0.2087, pruned_loss=0.02895, over 4924.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2369, pruned_loss=0.04685, over 955596.98 frames. ], batch size: 38, lr: 2.88e-03, grad_scale: 16.0 2023-03-27 10:08:09,354 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=157853.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:15,275 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157862.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 10:08:22,848 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.817e+01 1.555e+02 1.831e+02 2.254e+02 7.078e+02, threshold=3.662e+02, percent-clipped=7.0 2023-03-27 10:08:38,480 INFO [finetune.py:976] (6/7) Epoch 28, batch 3250, loss[loss=0.1423, simple_loss=0.2172, pruned_loss=0.03375, over 4755.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2376, pruned_loss=0.04709, over 955671.65 frames. ], batch size: 27, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:08:49,843 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=157914.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:08:56,878 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=157924.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:09:02,252 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.4701, 1.5238, 1.5729, 1.0395, 1.6748, 1.8886, 1.9370, 1.4371], device='cuda:6'), covar=tensor([0.0948, 0.0657, 0.0580, 0.0488, 0.0495, 0.0696, 0.0297, 0.0760], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0130, 0.0122, 0.0132, 0.0130, 0.0142, 0.0150], device='cuda:6'), out_proj_covar=tensor([8.8312e-05, 1.0560e-04, 9.2028e-05, 8.5880e-05, 9.2361e-05, 9.2261e-05, 1.0063e-04, 1.0720e-04], device='cuda:6') 2023-03-27 10:09:05,781 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.4719, 3.9359, 4.1299, 4.2945, 4.1617, 3.9915, 4.5871, 1.5630], device='cuda:6'), covar=tensor([0.0854, 0.0940, 0.0780, 0.1088, 0.1398, 0.1503, 0.0668, 0.5700], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0246, 0.0286, 0.0295, 0.0337, 0.0286, 0.0306, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:09:11,707 INFO [finetune.py:976] (6/7) Epoch 28, batch 3300, loss[loss=0.1349, simple_loss=0.2139, pruned_loss=0.02795, over 4755.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2419, pruned_loss=0.04836, over 956622.82 frames. 
], batch size: 27, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:09:18,274 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8732, 1.7096, 1.4687, 1.3682, 1.8622, 1.6175, 1.7947, 1.8540], device='cuda:6'), covar=tensor([0.1523, 0.1996, 0.3238, 0.2676, 0.2891, 0.1846, 0.3274, 0.1815], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0237, 0.0254, 0.0251, 0.0207, 0.0214, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:09:29,136 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.081e+02 1.521e+02 1.758e+02 2.140e+02 4.599e+02, threshold=3.515e+02, percent-clipped=1.0 2023-03-27 10:09:44,716 INFO [finetune.py:976] (6/7) Epoch 28, batch 3350, loss[loss=0.1795, simple_loss=0.2556, pruned_loss=0.05167, over 4798.00 frames. ], tot_loss[loss=0.1708, simple_loss=0.2434, pruned_loss=0.04913, over 954993.45 frames. ], batch size: 51, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:09:58,151 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158012.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:10:32,972 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158038.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:10:38,995 INFO [finetune.py:976] (6/7) Epoch 28, batch 3400, loss[loss=0.2242, simple_loss=0.2895, pruned_loss=0.07949, over 4714.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2448, pruned_loss=0.04973, over 955999.29 frames. ], batch size: 59, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:10:46,889 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158060.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:10:46,926 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158060.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:10:55,550 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.103e+02 1.550e+02 1.897e+02 2.350e+02 3.360e+02, threshold=3.793e+02, percent-clipped=0.0 2023-03-27 10:11:04,977 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158086.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:11:06,886 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9476, 1.7956, 1.5198, 1.4714, 1.9461, 1.6809, 1.8666, 1.9184], device='cuda:6'), covar=tensor([0.1369, 0.1877, 0.3129, 0.2449, 0.2561, 0.1797, 0.2777, 0.1739], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0253, 0.0250, 0.0206, 0.0213, 0.0202], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:11:12,077 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4984, 1.1090, 0.8406, 1.4384, 1.9896, 1.0587, 1.3558, 1.4814], device='cuda:6'), covar=tensor([0.2007, 0.2748, 0.2238, 0.1603, 0.2277, 0.2640, 0.1923, 0.2551], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0109, 0.0092, 0.0120, 0.0092, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 10:11:12,580 INFO [finetune.py:976] (6/7) Epoch 28, batch 3450, loss[loss=0.1602, simple_loss=0.2338, pruned_loss=0.04336, over 4905.00 frames. ], tot_loss[loss=0.1721, simple_loss=0.2453, pruned_loss=0.04944, over 955964.80 frames. 
], batch size: 46, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:11:18,610 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158108.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:11:28,436 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158122.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:11:40,107 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5045, 1.4325, 1.3338, 1.4619, 1.7870, 1.7020, 1.5956, 1.3071], device='cuda:6'), covar=tensor([0.0355, 0.0319, 0.0637, 0.0304, 0.0235, 0.0527, 0.0286, 0.0478], device='cuda:6'), in_proj_covar=tensor([0.0101, 0.0105, 0.0147, 0.0111, 0.0101, 0.0115, 0.0103, 0.0113], device='cuda:6'), out_proj_covar=tensor([7.8087e-05, 8.0550e-05, 1.1438e-04, 8.4481e-05, 7.8204e-05, 8.4921e-05, 7.6585e-05, 8.5550e-05], device='cuda:6') 2023-03-27 10:11:46,010 INFO [finetune.py:976] (6/7) Epoch 28, batch 3500, loss[loss=0.1616, simple_loss=0.2213, pruned_loss=0.05099, over 3952.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2429, pruned_loss=0.04887, over 955955.10 frames. ], batch size: 17, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:11:55,062 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158162.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:12:02,513 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.432e+02 1.800e+02 2.074e+02 3.411e+02, threshold=3.600e+02, percent-clipped=0.0 2023-03-27 10:12:09,602 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158183.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:12:15,736 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 10:12:19,504 INFO [finetune.py:976] (6/7) Epoch 28, batch 3550, loss[loss=0.1739, simple_loss=0.2354, pruned_loss=0.05622, over 4820.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2396, pruned_loss=0.04801, over 956379.26 frames. ], batch size: 41, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:12:28,584 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=158209.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:12:29,208 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158210.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:12:38,720 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158224.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:13:02,524 INFO [finetune.py:976] (6/7) Epoch 28, batch 3600, loss[loss=0.176, simple_loss=0.2468, pruned_loss=0.05264, over 4876.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2379, pruned_loss=0.04746, over 956553.32 frames. 
], batch size: 34, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:13:18,165 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158272.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:13:18,717 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.425e+02 1.679e+02 2.016e+02 3.584e+02, threshold=3.358e+02, percent-clipped=0.0 2023-03-27 10:13:22,888 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8093, 1.0782, 1.8179, 1.8686, 1.6578, 1.5993, 1.7338, 1.7283], device='cuda:6'), covar=tensor([0.3656, 0.3727, 0.2998, 0.3060, 0.4373, 0.3617, 0.4072, 0.2781], device='cuda:6'), in_proj_covar=tensor([0.0267, 0.0248, 0.0268, 0.0297, 0.0296, 0.0273, 0.0303, 0.0252], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:13:36,304 INFO [finetune.py:976] (6/7) Epoch 28, batch 3650, loss[loss=0.2378, simple_loss=0.2994, pruned_loss=0.08812, over 4808.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2412, pruned_loss=0.04919, over 954324.08 frames. ], batch size: 45, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:13:52,686 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6064, 1.2091, 0.9320, 1.6946, 1.9476, 1.5029, 1.5886, 1.5827], device='cuda:6'), covar=tensor([0.1477, 0.2042, 0.1788, 0.1102, 0.2065, 0.1980, 0.1344, 0.1869], device='cuda:6'), in_proj_covar=tensor([0.0091, 0.0095, 0.0110, 0.0093, 0.0121, 0.0093, 0.0099, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 10:14:10,062 INFO [finetune.py:976] (6/7) Epoch 28, batch 3700, loss[loss=0.2096, simple_loss=0.274, pruned_loss=0.07262, over 4815.00 frames. ], tot_loss[loss=0.1719, simple_loss=0.2442, pruned_loss=0.04983, over 954672.35 frames. ], batch size: 38, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:14:26,132 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.573e+02 1.954e+02 2.338e+02 5.991e+02, threshold=3.909e+02, percent-clipped=5.0 2023-03-27 10:14:33,460 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158384.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:14:43,249 INFO [finetune.py:976] (6/7) Epoch 28, batch 3750, loss[loss=0.2123, simple_loss=0.2805, pruned_loss=0.07204, over 4808.00 frames. ], tot_loss[loss=0.1732, simple_loss=0.2459, pruned_loss=0.05022, over 955300.68 frames. ], batch size: 51, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:15:03,250 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3505, 2.2110, 1.7932, 2.1204, 2.2914, 2.0519, 2.5170, 2.3111], device='cuda:6'), covar=tensor([0.1331, 0.1977, 0.2960, 0.2558, 0.2511, 0.1685, 0.2347, 0.1730], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0237, 0.0254, 0.0250, 0.0207, 0.0214, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:15:05,706 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.08 vs. 
limit=5.0 2023-03-27 10:15:08,694 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8254, 1.3554, 1.9610, 1.8916, 1.7090, 1.6398, 1.8148, 1.8508], device='cuda:6'), covar=tensor([0.3116, 0.3375, 0.2579, 0.3084, 0.3928, 0.3499, 0.3602, 0.2427], device='cuda:6'), in_proj_covar=tensor([0.0269, 0.0249, 0.0269, 0.0298, 0.0298, 0.0275, 0.0304, 0.0253], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:15:20,378 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158445.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:15:22,038 INFO [finetune.py:976] (6/7) Epoch 28, batch 3800, loss[loss=0.1106, simple_loss=0.1764, pruned_loss=0.0224, over 4112.00 frames. ], tot_loss[loss=0.1747, simple_loss=0.2475, pruned_loss=0.05096, over 951742.25 frames. ], batch size: 17, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:15:51,764 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.090e+02 1.580e+02 1.909e+02 2.258e+02 3.504e+02, threshold=3.818e+02, percent-clipped=0.0 2023-03-27 10:15:55,428 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=158478.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:16:08,375 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3369, 2.4738, 2.3519, 1.6668, 2.1116, 2.6807, 2.6733, 2.2000], device='cuda:6'), covar=tensor([0.0553, 0.0518, 0.0667, 0.0897, 0.1637, 0.0591, 0.0497, 0.0896], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0138, 0.0140, 0.0119, 0.0129, 0.0140, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:16:08,881 INFO [finetune.py:976] (6/7) Epoch 28, batch 3850, loss[loss=0.1515, simple_loss=0.2196, pruned_loss=0.04166, over 4742.00 frames. ], tot_loss[loss=0.1734, simple_loss=0.2461, pruned_loss=0.05034, over 952558.33 frames. ], batch size: 54, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:16:16,608 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158509.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:16:42,037 INFO [finetune.py:976] (6/7) Epoch 28, batch 3900, loss[loss=0.1331, simple_loss=0.2015, pruned_loss=0.0323, over 4750.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2432, pruned_loss=0.04944, over 953845.66 frames. ], batch size: 54, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:16:48,975 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158557.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:16:58,921 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.091e+02 1.423e+02 1.682e+02 2.011e+02 3.434e+02, threshold=3.365e+02, percent-clipped=0.0 2023-03-27 10:17:14,989 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7003, 1.2274, 0.9331, 1.6076, 2.1736, 1.3204, 1.5155, 1.5374], device='cuda:6'), covar=tensor([0.2101, 0.3006, 0.2359, 0.1735, 0.2217, 0.2508, 0.2013, 0.2934], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0109, 0.0092, 0.0120, 0.0092, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 10:17:15,468 INFO [finetune.py:976] (6/7) Epoch 28, batch 3950, loss[loss=0.1822, simple_loss=0.2632, pruned_loss=0.05059, over 4756.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2402, pruned_loss=0.0485, over 952844.84 frames. 
], batch size: 27, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:17:48,877 INFO [finetune.py:976] (6/7) Epoch 28, batch 4000, loss[loss=0.181, simple_loss=0.2472, pruned_loss=0.05737, over 4896.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2395, pruned_loss=0.04886, over 952538.07 frames. ], batch size: 32, lr: 2.87e-03, grad_scale: 16.0 2023-03-27 10:18:15,599 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.131e+01 1.444e+02 1.852e+02 2.094e+02 7.470e+02, threshold=3.703e+02, percent-clipped=2.0 2023-03-27 10:18:32,167 INFO [finetune.py:976] (6/7) Epoch 28, batch 4050, loss[loss=0.1781, simple_loss=0.251, pruned_loss=0.05258, over 4818.00 frames. ], tot_loss[loss=0.1724, simple_loss=0.2441, pruned_loss=0.05036, over 955157.06 frames. ], batch size: 33, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:18:33,541 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0692, 1.7435, 2.1846, 2.1564, 1.8597, 1.8712, 2.0842, 1.9728], device='cuda:6'), covar=tensor([0.4457, 0.4355, 0.3065, 0.3856, 0.5004, 0.4401, 0.4904, 0.3149], device='cuda:6'), in_proj_covar=tensor([0.0267, 0.0248, 0.0267, 0.0296, 0.0295, 0.0273, 0.0302, 0.0251], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:18:59,957 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=158740.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:05,204 INFO [finetune.py:976] (6/7) Epoch 28, batch 4100, loss[loss=0.1584, simple_loss=0.2391, pruned_loss=0.0388, over 4823.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2457, pruned_loss=0.05042, over 955525.41 frames. ], batch size: 33, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:19:22,721 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.067e+02 1.534e+02 1.832e+02 2.269e+02 3.411e+02, threshold=3.665e+02, percent-clipped=0.0 2023-03-27 10:19:25,287 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158777.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:25,879 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=158778.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:29,400 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158783.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:19:32,471 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9155, 1.5343, 2.1421, 1.4917, 1.9082, 2.0536, 1.5030, 2.2020], device='cuda:6'), covar=tensor([0.1097, 0.2150, 0.1197, 0.1673, 0.0808, 0.1137, 0.2705, 0.0665], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0206, 0.0193, 0.0189, 0.0174, 0.0213, 0.0218, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:19:38,838 INFO [finetune.py:976] (6/7) Epoch 28, batch 4150, loss[loss=0.1929, simple_loss=0.2676, pruned_loss=0.05916, over 4828.00 frames. ], tot_loss[loss=0.1728, simple_loss=0.2455, pruned_loss=0.05003, over 954922.90 frames. 
], batch size: 30, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:19:49,513 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1611, 2.1610, 2.0085, 2.3140, 2.5278, 2.3645, 2.0766, 1.6639], device='cuda:6'), covar=tensor([0.2091, 0.1734, 0.1628, 0.1538, 0.1816, 0.1001, 0.1966, 0.1836], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0200, 0.0246, 0.0191, 0.0218, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:19:58,160 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=158826.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:20:03,647 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=158835.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:20:05,948 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158838.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:20:09,557 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158844.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:20:11,901 INFO [finetune.py:976] (6/7) Epoch 28, batch 4200, loss[loss=0.1562, simple_loss=0.2199, pruned_loss=0.04622, over 4760.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.245, pruned_loss=0.0493, over 955077.43 frames. ], batch size: 23, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:20:26,973 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5564, 3.1159, 3.0874, 1.6473, 3.3303, 2.5258, 1.1746, 2.3575], device='cuda:6'), covar=tensor([0.2847, 0.2118, 0.1534, 0.2823, 0.1184, 0.1006, 0.3346, 0.1366], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0181, 0.0160, 0.0130, 0.0163, 0.0124, 0.0150, 0.0126], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 10:20:34,584 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.68 vs. limit=2.0 2023-03-27 10:20:35,370 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.763e+01 1.457e+02 1.627e+02 2.050e+02 3.601e+02, threshold=3.253e+02, percent-clipped=0.0 2023-03-27 10:21:03,468 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=158896.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 10:21:04,051 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7786, 1.8331, 2.6231, 2.1465, 2.0349, 4.5784, 1.8539, 2.0654], device='cuda:6'), covar=tensor([0.0988, 0.1754, 0.0955, 0.0966, 0.1604, 0.0218, 0.1501, 0.1803], device='cuda:6'), in_proj_covar=tensor([0.0076, 0.0083, 0.0073, 0.0076, 0.0092, 0.0080, 0.0086, 0.0081], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 10:21:04,571 INFO [finetune.py:976] (6/7) Epoch 28, batch 4250, loss[loss=0.1308, simple_loss=0.2054, pruned_loss=0.0281, over 4680.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2429, pruned_loss=0.04858, over 955227.00 frames. ], batch size: 23, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:21:35,803 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 10:21:46,470 INFO [finetune.py:976] (6/7) Epoch 28, batch 4300, loss[loss=0.1745, simple_loss=0.235, pruned_loss=0.05706, over 4827.00 frames. ], tot_loss[loss=0.1677, simple_loss=0.2401, pruned_loss=0.04768, over 955621.51 frames. 
], batch size: 30, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:22:03,612 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.096e+02 1.473e+02 1.789e+02 2.045e+02 3.501e+02, threshold=3.577e+02, percent-clipped=2.0 2023-03-27 10:22:20,211 INFO [finetune.py:976] (6/7) Epoch 28, batch 4350, loss[loss=0.1798, simple_loss=0.2412, pruned_loss=0.05918, over 4813.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2367, pruned_loss=0.04716, over 954120.93 frames. ], batch size: 45, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:22:47,682 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0658, 5.0060, 4.7155, 2.8226, 5.1252, 3.8740, 1.0078, 3.5346], device='cuda:6'), covar=tensor([0.2297, 0.2033, 0.1492, 0.2966, 0.0776, 0.0847, 0.4649, 0.1387], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0182, 0.0161, 0.0131, 0.0164, 0.0125, 0.0151, 0.0127], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 10:22:48,325 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159040.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:22:53,108 INFO [finetune.py:976] (6/7) Epoch 28, batch 4400, loss[loss=0.1789, simple_loss=0.2503, pruned_loss=0.05381, over 4287.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2373, pruned_loss=0.04726, over 954759.37 frames. ], batch size: 65, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:22:53,810 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.5625, 5.2164, 4.9906, 3.2943, 5.2759, 4.1686, 1.2892, 3.9599], device='cuda:6'), covar=tensor([0.1743, 0.1484, 0.1249, 0.2458, 0.0541, 0.0678, 0.4206, 0.1073], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0182, 0.0161, 0.0131, 0.0164, 0.0125, 0.0151, 0.0127], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 10:23:07,435 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159069.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:23:09,713 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.014e+02 1.529e+02 1.726e+02 2.218e+02 4.795e+02, threshold=3.452e+02, percent-clipped=2.0 2023-03-27 10:23:24,716 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:23:35,365 INFO [finetune.py:976] (6/7) Epoch 28, batch 4450, loss[loss=0.1276, simple_loss=0.209, pruned_loss=0.02312, over 4747.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2394, pruned_loss=0.04746, over 953273.56 frames. ], batch size: 28, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:23:48,809 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.42 vs. 
limit=2.0 2023-03-27 10:23:52,527 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3158, 2.1662, 1.9377, 0.9810, 2.1054, 1.8158, 1.6604, 2.1836], device='cuda:6'), covar=tensor([0.1009, 0.0812, 0.1692, 0.2088, 0.1232, 0.2239, 0.2332, 0.0953], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0191, 0.0201, 0.0181, 0.0211, 0.0211, 0.0224, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:24:00,804 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159130.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:03,011 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159133.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:07,592 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159139.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:12,910 INFO [finetune.py:976] (6/7) Epoch 28, batch 4500, loss[loss=0.1727, simple_loss=0.2453, pruned_loss=0.05007, over 4810.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2417, pruned_loss=0.04849, over 952022.55 frames. ], batch size: 33, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:24:20,182 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159159.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:24:28,959 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.131e+02 1.553e+02 1.848e+02 2.152e+02 3.966e+02, threshold=3.696e+02, percent-clipped=2.0 2023-03-27 10:24:42,403 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159191.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:24:46,586 INFO [finetune.py:976] (6/7) Epoch 28, batch 4550, loss[loss=0.1797, simple_loss=0.2483, pruned_loss=0.05559, over 4915.00 frames. ], tot_loss[loss=0.171, simple_loss=0.2436, pruned_loss=0.04919, over 951482.56 frames. ], batch size: 33, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:24:57,946 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-27 10:25:00,525 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159220.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:25:13,532 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-27 10:25:20,097 INFO [finetune.py:976] (6/7) Epoch 28, batch 4600, loss[loss=0.1497, simple_loss=0.2273, pruned_loss=0.03603, over 4815.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2427, pruned_loss=0.04871, over 951341.76 frames. 
], batch size: 41, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:25:35,683 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.005e+02 1.454e+02 1.668e+02 2.062e+02 3.965e+02, threshold=3.336e+02, percent-clipped=3.0 2023-03-27 10:25:39,737 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6620, 1.1701, 0.9314, 1.5894, 2.0594, 1.2691, 1.4570, 1.5942], device='cuda:6'), covar=tensor([0.1447, 0.2057, 0.1782, 0.1122, 0.1803, 0.2084, 0.1363, 0.1805], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0093, 0.0120, 0.0093, 0.0098, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 10:25:54,177 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159290.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:26:05,019 INFO [finetune.py:976] (6/7) Epoch 28, batch 4650, loss[loss=0.1273, simple_loss=0.2031, pruned_loss=0.02577, over 4899.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2405, pruned_loss=0.04807, over 951320.59 frames. ], batch size: 43, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:26:24,488 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4283, 2.3702, 2.1449, 2.5353, 2.3118, 2.3181, 2.3697, 3.1926], device='cuda:6'), covar=tensor([0.3704, 0.4573, 0.3122, 0.4148, 0.4128, 0.2472, 0.4040, 0.1682], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0264, 0.0237, 0.0274, 0.0260, 0.0230, 0.0258, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:26:55,601 INFO [finetune.py:976] (6/7) Epoch 28, batch 4700, loss[loss=0.145, simple_loss=0.2197, pruned_loss=0.03517, over 4799.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2385, pruned_loss=0.04717, over 953496.96 frames. ], batch size: 26, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:26:58,043 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159351.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:27:11,660 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.356e+01 1.407e+02 1.592e+02 1.978e+02 3.098e+02, threshold=3.183e+02, percent-clipped=0.0 2023-03-27 10:27:28,645 INFO [finetune.py:976] (6/7) Epoch 28, batch 4750, loss[loss=0.1803, simple_loss=0.257, pruned_loss=0.05177, over 4851.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2386, pruned_loss=0.04797, over 953511.87 frames. ], batch size: 44, lr: 2.87e-03, grad_scale: 32.0 2023-03-27 10:27:39,466 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. 
limit=2.0 2023-03-27 10:27:46,458 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159425.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:27:51,846 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159433.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:27:55,936 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159439.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:27:58,368 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1136, 2.0532, 2.1680, 1.6305, 1.9877, 2.2367, 2.3297, 1.7005], device='cuda:6'), covar=tensor([0.0615, 0.0606, 0.0641, 0.0787, 0.0755, 0.0691, 0.0564, 0.1195], device='cuda:6'), in_proj_covar=tensor([0.0133, 0.0139, 0.0142, 0.0120, 0.0130, 0.0141, 0.0142, 0.0165], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:28:02,250 INFO [finetune.py:976] (6/7) Epoch 28, batch 4800, loss[loss=0.1577, simple_loss=0.2171, pruned_loss=0.04913, over 4526.00 frames. ], tot_loss[loss=0.1681, simple_loss=0.2399, pruned_loss=0.04813, over 953471.92 frames. ], batch size: 20, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:28:16,686 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7867, 1.1982, 1.8729, 1.8378, 1.6400, 1.5665, 1.7186, 1.7782], device='cuda:6'), covar=tensor([0.3766, 0.3893, 0.3173, 0.3420, 0.4748, 0.3791, 0.4353, 0.2966], device='cuda:6'), in_proj_covar=tensor([0.0269, 0.0248, 0.0270, 0.0298, 0.0297, 0.0275, 0.0303, 0.0253], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:28:18,793 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.120e+02 1.565e+02 1.779e+02 2.195e+02 3.956e+02, threshold=3.558e+02, percent-clipped=1.0 2023-03-27 10:28:23,705 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159481.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:28:27,818 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159487.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:28:30,771 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159491.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 10:28:36,823 INFO [finetune.py:976] (6/7) Epoch 28, batch 4850, loss[loss=0.1574, simple_loss=0.2307, pruned_loss=0.04201, over 4762.00 frames. ], tot_loss[loss=0.1706, simple_loss=0.2434, pruned_loss=0.04887, over 954985.08 frames. ], batch size: 26, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:28:57,890 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159515.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:29:17,409 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159539.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:29:23,228 INFO [finetune.py:976] (6/7) Epoch 28, batch 4900, loss[loss=0.1703, simple_loss=0.2483, pruned_loss=0.04621, over 4886.00 frames. ], tot_loss[loss=0.172, simple_loss=0.2453, pruned_loss=0.04931, over 955647.80 frames. 
], batch size: 35, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:29:40,325 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.553e+02 1.825e+02 2.271e+02 5.584e+02, threshold=3.651e+02, percent-clipped=3.0 2023-03-27 10:29:56,963 INFO [finetune.py:976] (6/7) Epoch 28, batch 4950, loss[loss=0.1276, simple_loss=0.2024, pruned_loss=0.0264, over 4801.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2452, pruned_loss=0.04901, over 954415.80 frames. ], batch size: 25, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:30:18,761 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159631.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:30:28,814 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159646.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:30:29,931 INFO [finetune.py:976] (6/7) Epoch 28, batch 5000, loss[loss=0.2048, simple_loss=0.2645, pruned_loss=0.07258, over 4728.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2428, pruned_loss=0.04827, over 952648.16 frames. ], batch size: 23, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:30:42,192 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1132, 1.8089, 2.2177, 2.2197, 1.9318, 1.9410, 2.1536, 2.0892], device='cuda:6'), covar=tensor([0.4262, 0.4300, 0.3005, 0.3930, 0.5192, 0.4095, 0.4733, 0.3072], device='cuda:6'), in_proj_covar=tensor([0.0269, 0.0249, 0.0270, 0.0299, 0.0298, 0.0275, 0.0304, 0.0254], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:30:47,442 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.023e+02 1.490e+02 1.734e+02 1.957e+02 3.436e+02, threshold=3.469e+02, percent-clipped=0.0 2023-03-27 10:30:55,871 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8911, 3.9236, 3.7551, 1.8281, 4.0444, 3.0269, 0.9458, 2.8408], device='cuda:6'), covar=tensor([0.2208, 0.2048, 0.1379, 0.3261, 0.1035, 0.1033, 0.4361, 0.1458], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0181, 0.0161, 0.0131, 0.0164, 0.0125, 0.0150, 0.0127], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 10:30:59,466 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159692.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:31:05,616 INFO [finetune.py:976] (6/7) Epoch 28, batch 5050, loss[loss=0.1637, simple_loss=0.2261, pruned_loss=0.05064, over 4241.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2406, pruned_loss=0.04774, over 954747.89 frames. ], batch size: 65, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:31:32,284 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159725.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:31:55,163 INFO [finetune.py:976] (6/7) Epoch 28, batch 5100, loss[loss=0.2111, simple_loss=0.2756, pruned_loss=0.07328, over 4906.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.237, pruned_loss=0.04641, over 953548.93 frames. 
], batch size: 37, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:32:06,480 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2907, 2.2615, 2.3531, 1.8004, 2.2289, 2.4691, 2.6019, 1.9503], device='cuda:6'), covar=tensor([0.0604, 0.0653, 0.0655, 0.0764, 0.0704, 0.0660, 0.0543, 0.1073], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0119, 0.0129, 0.0140, 0.0140, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:32:21,763 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.008e+02 1.553e+02 1.866e+02 2.180e+02 3.771e+02, threshold=3.731e+02, percent-clipped=1.0 2023-03-27 10:32:21,833 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159773.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:32:37,624 INFO [finetune.py:976] (6/7) Epoch 28, batch 5150, loss[loss=0.1899, simple_loss=0.2683, pruned_loss=0.05571, over 4813.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2382, pruned_loss=0.04735, over 953901.04 frames. ], batch size: 38, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:32:41,526 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.52 vs. limit=2.0 2023-03-27 10:32:44,563 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7966, 1.3285, 1.8976, 1.8309, 1.6407, 1.6237, 1.7522, 1.7744], device='cuda:6'), covar=tensor([0.3946, 0.3789, 0.2895, 0.3393, 0.4129, 0.3420, 0.4005, 0.2727], device='cuda:6'), in_proj_covar=tensor([0.0268, 0.0248, 0.0269, 0.0298, 0.0297, 0.0274, 0.0302, 0.0253], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:32:49,248 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159815.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:11,652 INFO [finetune.py:976] (6/7) Epoch 28, batch 5200, loss[loss=0.1758, simple_loss=0.2537, pruned_loss=0.0489, over 4914.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2417, pruned_loss=0.04878, over 955201.73 frames. ], batch size: 36, lr: 2.86e-03, grad_scale: 32.0 2023-03-27 10:33:16,481 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159855.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:21,712 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159863.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:28,751 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.499e+02 1.871e+02 2.174e+02 4.313e+02, threshold=3.743e+02, percent-clipped=1.0 2023-03-27 10:33:31,457 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 10:33:42,334 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-27 10:33:44,909 INFO [finetune.py:976] (6/7) Epoch 28, batch 5250, loss[loss=0.1896, simple_loss=0.2665, pruned_loss=0.05637, over 4915.00 frames. ], tot_loss[loss=0.1703, simple_loss=0.2433, pruned_loss=0.04868, over 955789.52 frames. 
], batch size: 42, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:33:53,321 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=159910.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:33:59,361 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159916.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:34:26,202 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=159946.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:34:27,321 INFO [finetune.py:976] (6/7) Epoch 28, batch 5300, loss[loss=0.1811, simple_loss=0.2599, pruned_loss=0.05111, over 4856.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2449, pruned_loss=0.04911, over 957098.36 frames. ], batch size: 44, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:34:50,633 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=159971.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:34:52,304 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.066e+02 1.498e+02 1.765e+02 2.107e+02 3.764e+02, threshold=3.530e+02, percent-clipped=1.0 2023-03-27 10:34:56,501 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8166, 1.2828, 1.9081, 1.8265, 1.7024, 1.6416, 1.7885, 1.8043], device='cuda:6'), covar=tensor([0.4064, 0.3546, 0.2904, 0.3405, 0.4143, 0.3520, 0.3997, 0.2620], device='cuda:6'), in_proj_covar=tensor([0.0267, 0.0247, 0.0268, 0.0297, 0.0296, 0.0273, 0.0302, 0.0252], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:35:01,760 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=159987.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:35:06,018 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=159994.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:35:08,413 INFO [finetune.py:976] (6/7) Epoch 28, batch 5350, loss[loss=0.1559, simple_loss=0.2285, pruned_loss=0.04161, over 4815.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2439, pruned_loss=0.04818, over 956026.63 frames. ], batch size: 39, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:35:43,145 INFO [finetune.py:976] (6/7) Epoch 28, batch 5400, loss[loss=0.1499, simple_loss=0.2264, pruned_loss=0.03669, over 4755.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2417, pruned_loss=0.04738, over 956793.92 frames. ], batch size: 27, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:36:00,196 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.280e+01 1.494e+02 1.877e+02 2.215e+02 3.734e+02, threshold=3.755e+02, percent-clipped=1.0 2023-03-27 10:36:16,672 INFO [finetune.py:976] (6/7) Epoch 28, batch 5450, loss[loss=0.1661, simple_loss=0.2282, pruned_loss=0.05196, over 4732.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2394, pruned_loss=0.04685, over 955210.27 frames. ], batch size: 54, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:37:07,793 INFO [finetune.py:976] (6/7) Epoch 28, batch 5500, loss[loss=0.1995, simple_loss=0.2727, pruned_loss=0.0632, over 4914.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2367, pruned_loss=0.04597, over 956753.05 frames. ], batch size: 36, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:37:28,648 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.65 vs. 
2023-03-27 10:37:38,236 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.035e+02 1.497e+02 1.808e+02 2.088e+02 3.346e+02, threshold=3.616e+02, percent-clipped=0.0 2023-03-27 10:37:52,764 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4819, 1.8570, 2.7137, 1.6718, 2.3891, 2.6135, 1.7241, 2.6568], device='cuda:6'), covar=tensor([0.1147, 0.2093, 0.1483, 0.1947, 0.0854, 0.1308, 0.2885, 0.0877], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0203, 0.0190, 0.0186, 0.0172, 0.0211, 0.0215, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:37:55,162 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3847, 1.4529, 1.9655, 1.7117, 1.4755, 3.1834, 1.2958, 1.5571], device='cuda:6'), covar=tensor([0.0984, 0.1798, 0.1111, 0.0943, 0.1643, 0.0244, 0.1492, 0.1701], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0076, 0.0091, 0.0080, 0.0086, 0.0081], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 10:37:55,667 INFO [finetune.py:976] (6/7) Epoch 28, batch 5550, loss[loss=0.2299, simple_loss=0.2933, pruned_loss=0.08325, over 4830.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2394, pruned_loss=0.0476, over 956651.70 frames. ], batch size: 49, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:38:03,851 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=160211.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:38:16,408 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6537, 1.6143, 1.4050, 1.7876, 1.5532, 1.7362, 1.0321, 1.4306], device='cuda:6'), covar=tensor([0.2288, 0.2062, 0.2068, 0.1662, 0.1765, 0.1241, 0.2627, 0.1926], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0212, 0.0215, 0.0199, 0.0246, 0.0191, 0.0217, 0.0205], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:38:17,837 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 10:38:27,206 INFO [finetune.py:976] (6/7) Epoch 28, batch 5600, loss[loss=0.1702, simple_loss=0.246, pruned_loss=0.04722, over 4756.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2431, pruned_loss=0.04887, over 956250.33 frames. ], batch size: 26, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:38:32,122 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.76 vs. limit=2.0
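Each [optim.py:369] record summarizes the gradient norms seen recently: Clipping_scale, five quantiles (min, 25%, median, 75%, max), the clipping threshold, and the share of batches that were clipped. In every record in this log the threshold equals Clipping_scale times the median, e.g. 2.0 * 1.808e+02 = 3.616e+02 in the record above, so the numbers are consistent with a median-based adaptive threshold. A sketch of that bookkeeping, assuming the statistics are taken over a window of recent per-batch gradient norms:

```python
import torch

def grad_norm_stats(recent_norms: list[float], clipping_scale: float = 2.0):
    """Summarise a window of per-batch gradient norms the way the
    [optim.py:369] records read. Sketch only: consistent with the logged
    numbers, not necessarily icefall's actual implementation.
    """
    norms = torch.tensor(recent_norms)
    # min / 25% / median / 75% / max, as printed in the log
    quartiles = torch.quantile(norms, torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
    threshold = clipping_scale * quartiles[2]  # scale times the median
    percent_clipped = 100.0 * (norms > threshold).float().mean().item()
    return quartiles, threshold.item(), percent_clipped

# Example consistent with the record above: with a window median of
# 1.808e+02 and clipping_scale=2.0, the threshold comes out to 3.616e+02.
```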
2023-03-27 10:38:37,632 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=160266.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:38:42,225 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.097e+02 1.448e+02 1.891e+02 2.366e+02 4.690e+02, threshold=3.782e+02, percent-clipped=2.0 2023-03-27 10:38:49,860 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160287.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:38:51,653 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0119, 1.9076, 1.5981, 1.8986, 1.9931, 1.7139, 2.1871, 2.0009], device='cuda:6'), covar=tensor([0.1236, 0.1795, 0.2630, 0.2167, 0.2238, 0.1562, 0.2482, 0.1526], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0189, 0.0235, 0.0251, 0.0249, 0.0206, 0.0214, 0.0201], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:38:56,598 INFO [finetune.py:976] (6/7) Epoch 28, batch 5650, loss[loss=0.2056, simple_loss=0.2725, pruned_loss=0.06936, over 4885.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2452, pruned_loss=0.04879, over 957152.31 frames. ], batch size: 32, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:39:01,994 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=160307.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:39:24,571 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=160335.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:39:35,803 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2076, 1.7255, 2.2115, 2.2574, 2.0040, 1.9930, 2.1417, 2.1387], device='cuda:6'), covar=tensor([0.3845, 0.3964, 0.3503, 0.3803, 0.5294, 0.4029, 0.4705, 0.3110], device='cuda:6'), in_proj_covar=tensor([0.0269, 0.0248, 0.0269, 0.0298, 0.0298, 0.0275, 0.0303, 0.0253], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:39:36,255 INFO [finetune.py:976] (6/7) Epoch 28, batch 5700, loss[loss=0.1405, simple_loss=0.2138, pruned_loss=0.03356, over 4130.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2404, pruned_loss=0.04783, over 938114.17 frames. ], batch size: 18, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:39:48,022 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=160368.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:39:53,235 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.932e+01 1.353e+02 1.695e+02 2.046e+02 5.031e+02, threshold=3.390e+02, percent-clipped=3.0 2023-03-27 10:40:10,862 INFO [finetune.py:976] (6/7) Epoch 29, batch 0, loss[loss=0.1646, simple_loss=0.2401, pruned_loss=0.04455, over 4833.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2401, pruned_loss=0.04455, over 4833.00 frames. ], batch size: 47, lr: 2.86e-03, grad_scale: 16.0 2023-03-27 10:40:10,862 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 10:40:21,878 INFO [finetune.py:1010] (6/7) Epoch 29, validation: loss=0.1588, simple_loss=0.2262, pruned_loss=0.04569, over 2265189.00 frames.
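At the first batch of each epoch, finetune.py pauses training and computes a validation loss over the entire dev set, here loss=0.1588 over 2265189.00 frames. The "over N frames" phrasing suggests a frame-weighted average; the sketch below shows that bookkeeping under an assumed per-batch interface (the model(batch) signature is hypothetical, not icefall's API).

```python
import torch

@torch.no_grad()
def compute_validation_loss(model, valid_dl) -> float:
    """Frame-weighted average loss over a validation dataloader.

    Sketch only: `model(batch)` is a hypothetical interface assumed to
    return (avg_loss_per_frame, num_frames) for one batch.
    """
    model.eval()
    tot_loss = 0.0
    tot_frames = 0.0
    for batch in valid_dl:
        loss, num_frames = model(batch)
        tot_loss += float(loss) * num_frames  # re-weight by batch size in frames
        tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames  # e.g. 0.1588 over 2265189.00 frames above
```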
2023-03-27 10:40:21,879 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 10:40:24,832 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7337, 1.5562, 2.2616, 3.4889, 2.4299, 2.4013, 1.0356, 3.0132], device='cuda:6'), covar=tensor([0.1583, 0.1283, 0.1193, 0.0493, 0.0707, 0.1508, 0.1723, 0.0399], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0134, 0.0165, 0.0101, 0.0136, 0.0125, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 10:40:57,960 INFO [finetune.py:976] (6/7) Epoch 29, batch 50, loss[loss=0.1882, simple_loss=0.2591, pruned_loss=0.05867, over 4862.00 frames. ], tot_loss[loss=0.1731, simple_loss=0.2472, pruned_loss=0.04952, over 215009.69 frames. ], batch size: 34, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:41:07,359 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 10:41:17,664 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.82 vs. limit=5.0 2023-03-27 10:41:23,861 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.11 vs. limit=2.0 2023-03-27 10:41:38,862 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.219e+02 1.505e+02 1.852e+02 2.145e+02 8.823e+02, threshold=3.704e+02, percent-clipped=1.0 2023-03-27 10:41:39,934 INFO [finetune.py:976] (6/7) Epoch 29, batch 100, loss[loss=0.1613, simple_loss=0.237, pruned_loss=0.04282, over 4713.00 frames. ], tot_loss[loss=0.1682, simple_loss=0.2409, pruned_loss=0.04777, over 379485.56 frames. ], batch size: 54, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:42:14,767 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160511.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:42:34,272 INFO [finetune.py:976] (6/7) Epoch 29, batch 150, loss[loss=0.1765, simple_loss=0.2504, pruned_loss=0.05128, over 4817.00 frames. ], tot_loss[loss=0.1635, simple_loss=0.2348, pruned_loss=0.04611, over 505685.02 frames. ], batch size: 30, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:42:43,905 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3478, 2.2496, 1.7355, 2.3296, 2.2922, 2.0035, 2.5950, 2.3171], device='cuda:6'), covar=tensor([0.1297, 0.1908, 0.3066, 0.2388, 0.2530, 0.1742, 0.2799, 0.1691], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0191, 0.0238, 0.0254, 0.0251, 0.0209, 0.0217, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:42:55,949 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=160559.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:43:00,754 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160566.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:43:06,535 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.027e+02 1.446e+02 1.774e+02 2.135e+02 3.412e+02, threshold=3.547e+02, percent-clipped=0.0 2023-03-27 10:43:07,128 INFO [finetune.py:976] (6/7) Epoch 29, batch 200, loss[loss=0.1828, simple_loss=0.243, pruned_loss=0.06129, over 4869.00 frames. ], tot_loss[loss=0.161, simple_loss=0.2324, pruned_loss=0.04486, over 607175.26 frames. 
], batch size: 31, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:43:12,401 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5641, 1.4899, 2.1426, 3.1552, 2.1397, 2.2519, 1.0273, 2.7013], device='cuda:6'), covar=tensor([0.1751, 0.1399, 0.1178, 0.0573, 0.0787, 0.1379, 0.1738, 0.0502], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0165, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 10:43:32,505 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=160614.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:43:40,982 INFO [finetune.py:976] (6/7) Epoch 29, batch 250, loss[loss=0.1289, simple_loss=0.1956, pruned_loss=0.03112, over 4389.00 frames. ], tot_loss[loss=0.1643, simple_loss=0.2368, pruned_loss=0.04593, over 686491.82 frames. ], batch size: 19, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:44:05,537 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=160663.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 10:44:12,918 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.082e+02 1.455e+02 1.868e+02 2.134e+02 3.558e+02, threshold=3.736e+02, percent-clipped=1.0 2023-03-27 10:44:13,967 INFO [finetune.py:976] (6/7) Epoch 29, batch 300, loss[loss=0.1224, simple_loss=0.1968, pruned_loss=0.02401, over 4722.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2415, pruned_loss=0.04773, over 747070.62 frames. ], batch size: 23, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:44:26,385 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0 2023-03-27 10:44:36,655 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.22 vs. limit=2.0 2023-03-27 10:44:57,145 INFO [finetune.py:976] (6/7) Epoch 29, batch 350, loss[loss=0.1842, simple_loss=0.2562, pruned_loss=0.05612, over 4875.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.2452, pruned_loss=0.04904, over 793422.85 frames. ], batch size: 34, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:45:20,174 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0621, 2.0030, 2.1610, 1.4323, 1.9908, 2.1828, 2.2218, 1.7480], device='cuda:6'), covar=tensor([0.0601, 0.0676, 0.0613, 0.0868, 0.0719, 0.0758, 0.0566, 0.1164], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0138, 0.0140, 0.0119, 0.0128, 0.0140, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:45:32,640 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.36 vs. limit=2.0 2023-03-27 10:45:37,464 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.928e+01 1.581e+02 1.864e+02 2.210e+02 3.251e+02, threshold=3.728e+02, percent-clipped=0.0 2023-03-27 10:45:38,107 INFO [finetune.py:976] (6/7) Epoch 29, batch 400, loss[loss=0.184, simple_loss=0.2686, pruned_loss=0.04976, over 4812.00 frames. ], tot_loss[loss=0.1733, simple_loss=0.2473, pruned_loss=0.04963, over 830139.25 frames. 
], batch size: 33, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:45:59,570 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.1817, 1.2923, 1.3822, 0.6696, 1.3193, 1.5743, 1.5850, 1.2845], device='cuda:6'), covar=tensor([0.1007, 0.0588, 0.0519, 0.0540, 0.0496, 0.0584, 0.0369, 0.0696], device='cuda:6'), in_proj_covar=tensor([0.0120, 0.0148, 0.0130, 0.0122, 0.0131, 0.0130, 0.0142, 0.0151], device='cuda:6'), out_proj_covar=tensor([8.7830e-05, 1.0579e-04, 9.2256e-05, 8.5358e-05, 9.1659e-05, 9.1751e-05, 1.0126e-04, 1.0756e-04], device='cuda:6') 2023-03-27 10:46:05,498 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9083, 1.8251, 1.7456, 1.9821, 1.8226, 4.4444, 1.7493, 2.2152], device='cuda:6'), covar=tensor([0.3215, 0.2543, 0.2008, 0.2167, 0.1385, 0.0141, 0.2353, 0.1173], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0124, 0.0113, 0.0095, 0.0093, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 10:46:11,857 INFO [finetune.py:976] (6/7) Epoch 29, batch 450, loss[loss=0.1667, simple_loss=0.242, pruned_loss=0.04569, over 4866.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2453, pruned_loss=0.04898, over 858656.79 frames. ], batch size: 34, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:46:39,410 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9906, 3.9620, 3.7757, 2.0957, 4.1251, 3.2734, 1.4172, 2.8446], device='cuda:6'), covar=tensor([0.2219, 0.2061, 0.1485, 0.3110, 0.0919, 0.0894, 0.3768, 0.1566], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0180, 0.0160, 0.0130, 0.0163, 0.0124, 0.0150, 0.0126], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 10:46:55,071 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.473e+02 1.674e+02 2.082e+02 4.783e+02, threshold=3.348e+02, percent-clipped=2.0 2023-03-27 10:46:55,692 INFO [finetune.py:976] (6/7) Epoch 29, batch 500, loss[loss=0.1795, simple_loss=0.2378, pruned_loss=0.06062, over 4867.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2425, pruned_loss=0.04851, over 878073.55 frames. ], batch size: 31, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:47:03,660 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-27 10:47:15,115 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=160901.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:47:36,924 INFO [finetune.py:976] (6/7) Epoch 29, batch 550, loss[loss=0.1541, simple_loss=0.2229, pruned_loss=0.04264, over 4819.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2386, pruned_loss=0.0474, over 896011.33 frames. 
], batch size: 41, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:48:08,208 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=160962.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:48:08,821 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=160963.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:48:10,051 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=160965.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:48:11,242 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5416, 1.7149, 1.5000, 1.5174, 2.0413, 2.0988, 1.8197, 1.7328], device='cuda:6'), covar=tensor([0.0521, 0.0339, 0.0613, 0.0381, 0.0348, 0.0558, 0.0366, 0.0405], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0106, 0.0148, 0.0111, 0.0102, 0.0117, 0.0105, 0.0115], device='cuda:6'), out_proj_covar=tensor([7.8707e-05, 8.1178e-05, 1.1546e-04, 8.4904e-05, 7.8574e-05, 8.5927e-05, 7.7588e-05, 8.6940e-05], device='cuda:6') 2023-03-27 10:48:15,387 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.000e+01 1.387e+02 1.769e+02 2.082e+02 3.377e+02, threshold=3.538e+02, percent-clipped=1.0 2023-03-27 10:48:16,013 INFO [finetune.py:976] (6/7) Epoch 29, batch 600, loss[loss=0.134, simple_loss=0.2057, pruned_loss=0.03114, over 4907.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2385, pruned_loss=0.0472, over 910259.54 frames. ], batch size: 36, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:48:41,023 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161011.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 10:48:41,641 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9162, 1.6282, 2.3144, 1.5473, 1.9335, 2.1265, 1.5463, 2.2834], device='cuda:6'), covar=tensor([0.1178, 0.1938, 0.1488, 0.1910, 0.0909, 0.1330, 0.2766, 0.0814], device='cuda:6'), in_proj_covar=tensor([0.0192, 0.0206, 0.0193, 0.0190, 0.0174, 0.0214, 0.0219, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:48:44,050 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8276, 1.7029, 1.9004, 1.3873, 1.7157, 1.9527, 1.9775, 1.4944], device='cuda:6'), covar=tensor([0.0527, 0.0605, 0.0536, 0.0779, 0.1051, 0.0599, 0.0489, 0.1081], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0119, 0.0128, 0.0141, 0.0140, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:48:49,430 INFO [finetune.py:976] (6/7) Epoch 29, batch 650, loss[loss=0.1867, simple_loss=0.2603, pruned_loss=0.05651, over 4903.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2425, pruned_loss=0.04857, over 920082.18 frames. ], batch size: 43, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:48:50,193 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161026.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:48:57,675 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 10:49:22,504 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.055e+02 1.587e+02 1.904e+02 2.267e+02 4.706e+02, threshold=3.807e+02, percent-clipped=2.0 2023-03-27 10:49:23,103 INFO [finetune.py:976] (6/7) Epoch 29, batch 700, loss[loss=0.207, simple_loss=0.2982, pruned_loss=0.05792, over 4885.00 frames. 
], tot_loss[loss=0.1718, simple_loss=0.2456, pruned_loss=0.04899, over 929325.17 frames. ], batch size: 35, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:50:03,476 INFO [finetune.py:976] (6/7) Epoch 29, batch 750, loss[loss=0.2035, simple_loss=0.2748, pruned_loss=0.06615, over 4816.00 frames. ], tot_loss[loss=0.1736, simple_loss=0.2475, pruned_loss=0.04982, over 936149.93 frames. ], batch size: 38, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:50:05,374 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3312, 1.5103, 1.2764, 1.4159, 1.6492, 1.6874, 1.4908, 1.4452], device='cuda:6'), covar=tensor([0.0474, 0.0302, 0.0604, 0.0322, 0.0291, 0.0423, 0.0374, 0.0379], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0107, 0.0150, 0.0113, 0.0103, 0.0118, 0.0105, 0.0116], device='cuda:6'), out_proj_covar=tensor([7.9837e-05, 8.2037e-05, 1.1639e-04, 8.5855e-05, 7.9271e-05, 8.6834e-05, 7.8120e-05, 8.7713e-05], device='cuda:6') 2023-03-27 10:50:46,353 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.613e+01 1.645e+02 1.965e+02 2.346e+02 5.118e+02, threshold=3.931e+02, percent-clipped=1.0 2023-03-27 10:50:46,988 INFO [finetune.py:976] (6/7) Epoch 29, batch 800, loss[loss=0.2284, simple_loss=0.2763, pruned_loss=0.09025, over 4892.00 frames. ], tot_loss[loss=0.173, simple_loss=0.2467, pruned_loss=0.0496, over 940370.63 frames. ], batch size: 35, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:51:15,285 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161216.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:51:20,580 INFO [finetune.py:976] (6/7) Epoch 29, batch 850, loss[loss=0.1721, simple_loss=0.2533, pruned_loss=0.04542, over 4867.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2444, pruned_loss=0.04915, over 941997.74 frames. ], batch size: 34, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:51:24,328 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161231.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:51:38,270 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1134, 1.8245, 2.4945, 3.5277, 2.4512, 2.6709, 1.4697, 2.8857], device='cuda:6'), covar=tensor([0.1301, 0.1192, 0.0999, 0.0510, 0.0670, 0.2110, 0.1346, 0.0435], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0100, 0.0136, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 10:51:47,358 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.11 vs. limit=5.0 2023-03-27 10:51:48,942 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161257.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:52:04,263 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.030e+02 1.396e+02 1.676e+02 2.040e+02 3.102e+02, threshold=3.353e+02, percent-clipped=0.0 2023-03-27 10:52:04,921 INFO [finetune.py:976] (6/7) Epoch 29, batch 900, loss[loss=0.1801, simple_loss=0.2382, pruned_loss=0.06101, over 4903.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.242, pruned_loss=0.0484, over 944490.60 frames. 
], batch size: 35, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:52:06,262 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161277.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:52:15,376 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161292.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:52:21,280 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.5391, 3.0466, 2.8863, 1.4437, 3.0100, 2.5160, 2.4383, 2.7758], device='cuda:6'), covar=tensor([0.0740, 0.0977, 0.1616, 0.2220, 0.1573, 0.2200, 0.2027, 0.1078], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0188, 0.0200, 0.0179, 0.0207, 0.0209, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:52:35,816 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161321.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:52:36,456 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0729, 2.7082, 2.6194, 1.3823, 2.7223, 2.2062, 2.1430, 2.6429], device='cuda:6'), covar=tensor([0.1043, 0.0836, 0.1694, 0.2139, 0.1598, 0.2372, 0.2055, 0.1099], device='cuda:6'), in_proj_covar=tensor([0.0169, 0.0187, 0.0199, 0.0178, 0.0206, 0.0207, 0.0221, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:52:38,181 INFO [finetune.py:976] (6/7) Epoch 29, batch 950, loss[loss=0.1476, simple_loss=0.2198, pruned_loss=0.03771, over 4818.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.241, pruned_loss=0.04813, over 947488.92 frames. ], batch size: 25, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:52:48,837 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2233, 1.6237, 0.7535, 2.1350, 2.5850, 1.8634, 1.8184, 2.0322], device='cuda:6'), covar=tensor([0.1454, 0.1984, 0.2100, 0.1119, 0.1786, 0.1773, 0.1400, 0.2078], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0093, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 10:53:28,290 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.815e+01 1.511e+02 1.765e+02 2.170e+02 3.612e+02, threshold=3.529e+02, percent-clipped=1.0 2023-03-27 10:53:28,922 INFO [finetune.py:976] (6/7) Epoch 29, batch 1000, loss[loss=0.1783, simple_loss=0.2522, pruned_loss=0.0522, over 4828.00 frames. ], tot_loss[loss=0.1687, simple_loss=0.2415, pruned_loss=0.048, over 951249.11 frames. ], batch size: 30, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:53:49,432 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161401.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:54:06,939 INFO [finetune.py:976] (6/7) Epoch 29, batch 1050, loss[loss=0.1742, simple_loss=0.2475, pruned_loss=0.05048, over 4745.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.243, pruned_loss=0.04817, over 952900.57 frames. 
], batch size: 54, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:54:28,548 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161460.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:54:30,803 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161462.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:54:39,305 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.696e+01 1.443e+02 1.825e+02 2.108e+02 3.139e+02, threshold=3.650e+02, percent-clipped=0.0 2023-03-27 10:54:39,918 INFO [finetune.py:976] (6/7) Epoch 29, batch 1100, loss[loss=0.1385, simple_loss=0.2164, pruned_loss=0.03034, over 4873.00 frames. ], tot_loss[loss=0.1704, simple_loss=0.2439, pruned_loss=0.0485, over 954084.43 frames. ], batch size: 34, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:54:54,600 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 10:55:11,580 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161521.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:55:14,357 INFO [finetune.py:976] (6/7) Epoch 29, batch 1150, loss[loss=0.1531, simple_loss=0.2389, pruned_loss=0.03365, over 4831.00 frames. ], tot_loss[loss=0.1711, simple_loss=0.2448, pruned_loss=0.04872, over 952541.50 frames. ], batch size: 30, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:55:15,102 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2717, 2.2619, 1.7652, 2.3975, 2.8102, 2.2756, 2.2685, 1.6881], device='cuda:6'), covar=tensor([0.2027, 0.1732, 0.1847, 0.1431, 0.1548, 0.1084, 0.1837, 0.1732], device='cuda:6'), in_proj_covar=tensor([0.0248, 0.0214, 0.0217, 0.0201, 0.0248, 0.0193, 0.0218, 0.0207], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:55:36,268 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2351, 2.1142, 1.8715, 2.0405, 2.0335, 2.0266, 2.0172, 2.7967], device='cuda:6'), covar=tensor([0.3639, 0.4175, 0.3180, 0.3478, 0.4226, 0.2316, 0.3502, 0.1530], device='cuda:6'), in_proj_covar=tensor([0.0287, 0.0262, 0.0237, 0.0274, 0.0259, 0.0230, 0.0257, 0.0237], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:55:39,801 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161557.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:55:50,717 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161572.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:55:51,852 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.769e+01 1.420e+02 1.779e+02 2.194e+02 3.095e+02, threshold=3.558e+02, percent-clipped=0.0 2023-03-27 10:55:52,954 INFO [finetune.py:976] (6/7) Epoch 29, batch 1200, loss[loss=0.1418, simple_loss=0.2094, pruned_loss=0.03711, over 4092.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2431, pruned_loss=0.04837, over 952754.09 frames. 
], batch size: 17, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:55:59,995 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8283, 1.4107, 0.7005, 1.7486, 2.2503, 1.4653, 1.5363, 1.7061], device='cuda:6'), covar=tensor([0.1516, 0.2222, 0.2176, 0.1272, 0.1800, 0.2037, 0.1461, 0.2077], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0093, 0.0119, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 10:56:03,031 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161587.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:56:22,442 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161605.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:56:33,591 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161621.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:56:36,405 INFO [finetune.py:976] (6/7) Epoch 29, batch 1250, loss[loss=0.1961, simple_loss=0.2599, pruned_loss=0.06616, over 4826.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2398, pruned_loss=0.04704, over 951472.79 frames. ], batch size: 41, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:57:05,943 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8314, 1.7120, 1.5515, 1.4620, 1.8883, 1.6129, 1.8819, 1.8234], device='cuda:6'), covar=tensor([0.1361, 0.1862, 0.2775, 0.2385, 0.2336, 0.1744, 0.2535, 0.1652], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0189, 0.0236, 0.0252, 0.0249, 0.0208, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:57:07,733 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161669.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:57:11,732 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.453e+02 1.676e+02 2.058e+02 3.499e+02, threshold=3.351e+02, percent-clipped=0.0 2023-03-27 10:57:12,869 INFO [finetune.py:976] (6/7) Epoch 29, batch 1300, loss[loss=0.1679, simple_loss=0.2444, pruned_loss=0.04566, over 4848.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2374, pruned_loss=0.04647, over 953097.83 frames. 
], batch size: 47, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:57:22,432 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9512, 1.4708, 2.0140, 2.0119, 1.8069, 1.7686, 1.9418, 1.9093], device='cuda:6'), covar=tensor([0.4352, 0.4296, 0.3546, 0.3906, 0.4777, 0.4009, 0.4962, 0.3033], device='cuda:6'), in_proj_covar=tensor([0.0271, 0.0251, 0.0271, 0.0300, 0.0299, 0.0276, 0.0306, 0.0255], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:57:29,155 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8605, 1.7058, 2.4364, 3.6178, 2.4901, 2.6306, 1.3606, 3.0918], device='cuda:6'), covar=tensor([0.1638, 0.1252, 0.1234, 0.0531, 0.0774, 0.1363, 0.1691, 0.0433], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0132, 0.0165, 0.0100, 0.0136, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 10:57:44,085 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2334, 2.9611, 2.6896, 1.5134, 2.8312, 2.2829, 2.2092, 2.7101], device='cuda:6'), covar=tensor([0.1013, 0.0744, 0.1881, 0.2138, 0.1598, 0.2609, 0.2184, 0.1145], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0188, 0.0200, 0.0179, 0.0207, 0.0209, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:57:54,742 INFO [finetune.py:976] (6/7) Epoch 29, batch 1350, loss[loss=0.2023, simple_loss=0.2775, pruned_loss=0.06354, over 4923.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2372, pruned_loss=0.04691, over 952630.68 frames. ], batch size: 38, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:58:21,489 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7051, 1.4576, 2.2422, 3.1598, 2.0971, 2.3546, 1.3965, 2.7351], device='cuda:6'), covar=tensor([0.1597, 0.1388, 0.1087, 0.0578, 0.0819, 0.1901, 0.1470, 0.0428], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0132, 0.0164, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 10:58:28,227 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.76 vs. limit=5.0 2023-03-27 10:58:29,771 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161757.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:58:42,753 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7055, 1.4696, 2.3191, 1.9557, 1.8067, 3.5293, 1.4556, 1.7706], device='cuda:6'), covar=tensor([0.0929, 0.1688, 0.1434, 0.0887, 0.1457, 0.0272, 0.1428, 0.1657], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0082, 0.0073, 0.0076, 0.0091, 0.0080, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 10:58:49,499 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.001e+02 1.561e+02 1.829e+02 2.338e+02 6.236e+02, threshold=3.658e+02, percent-clipped=4.0 2023-03-27 10:58:50,571 INFO [finetune.py:976] (6/7) Epoch 29, batch 1400, loss[loss=0.1996, simple_loss=0.2703, pruned_loss=0.06445, over 4792.00 frames. ], tot_loss[loss=0.1685, simple_loss=0.2412, pruned_loss=0.04791, over 952548.78 frames. 
], batch size: 51, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:58:55,325 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6555, 1.4654, 1.1095, 0.3899, 1.2494, 1.4534, 1.3753, 1.4235], device='cuda:6'), covar=tensor([0.0954, 0.0855, 0.1391, 0.1938, 0.1331, 0.2166, 0.2177, 0.0820], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0188, 0.0199, 0.0179, 0.0207, 0.0208, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:59:07,151 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1346, 1.9246, 1.6692, 2.1084, 2.6263, 2.2029, 2.1828, 1.5876], device='cuda:6'), covar=tensor([0.2097, 0.2005, 0.1895, 0.1594, 0.1748, 0.1090, 0.1938, 0.1970], device='cuda:6'), in_proj_covar=tensor([0.0246, 0.0213, 0.0216, 0.0200, 0.0247, 0.0191, 0.0217, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:59:07,741 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2628, 2.2138, 2.4240, 1.5515, 2.2606, 2.4851, 2.4314, 1.8691], device='cuda:6'), covar=tensor([0.0607, 0.0663, 0.0597, 0.0845, 0.0752, 0.0635, 0.0595, 0.1206], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0139, 0.0141, 0.0119, 0.0129, 0.0141, 0.0141, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 10:59:17,538 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=161815.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:59:18,092 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=161816.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:59:23,989 INFO [finetune.py:976] (6/7) Epoch 29, batch 1450, loss[loss=0.1527, simple_loss=0.2283, pruned_loss=0.03854, over 4164.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.243, pruned_loss=0.04808, over 952604.69 frames. ], batch size: 18, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:59:55,391 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161872.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 10:59:56,504 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.509e+02 1.810e+02 2.171e+02 4.074e+02, threshold=3.620e+02, percent-clipped=1.0 2023-03-27 10:59:57,115 INFO [finetune.py:976] (6/7) Epoch 29, batch 1500, loss[loss=0.1636, simple_loss=0.2369, pruned_loss=0.04515, over 4772.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2429, pruned_loss=0.04785, over 954105.21 frames. ], batch size: 28, lr: 2.85e-03, grad_scale: 16.0 2023-03-27 10:59:58,325 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=161876.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:00:05,892 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=161887.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:00:27,815 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161920.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:00:32,887 INFO [finetune.py:976] (6/7) Epoch 29, batch 1550, loss[loss=0.1406, simple_loss=0.2152, pruned_loss=0.03296, over 4825.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2428, pruned_loss=0.04773, over 954995.73 frames. 
], batch size: 33, lr: 2.85e-03, grad_scale: 32.0 2023-03-27 11:00:37,314 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.82 vs. limit=2.0 2023-03-27 11:00:44,438 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=161935.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:01:16,692 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.413e+02 1.667e+02 1.984e+02 3.261e+02, threshold=3.334e+02, percent-clipped=0.0 2023-03-27 11:01:17,325 INFO [finetune.py:976] (6/7) Epoch 29, batch 1600, loss[loss=0.1591, simple_loss=0.2385, pruned_loss=0.03987, over 4844.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2399, pruned_loss=0.04669, over 956053.97 frames. ], batch size: 49, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:01:59,455 INFO [finetune.py:976] (6/7) Epoch 29, batch 1650, loss[loss=0.1492, simple_loss=0.2194, pruned_loss=0.03949, over 4736.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2379, pruned_loss=0.04622, over 958589.64 frames. ], batch size: 54, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:02:22,372 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162057.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:02:29,019 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9152, 1.3832, 0.8605, 1.7146, 2.3079, 1.4751, 1.6055, 1.7224], device='cuda:6'), covar=tensor([0.1333, 0.1867, 0.1873, 0.1153, 0.1740, 0.1758, 0.1310, 0.1922], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0108, 0.0093, 0.0119, 0.0092, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:02:34,948 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.499e+01 1.468e+02 1.758e+02 2.171e+02 4.899e+02, threshold=3.516e+02, percent-clipped=3.0 2023-03-27 11:02:35,583 INFO [finetune.py:976] (6/7) Epoch 29, batch 1700, loss[loss=0.183, simple_loss=0.2476, pruned_loss=0.05925, over 4859.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2352, pruned_loss=0.04544, over 956405.00 frames. ], batch size: 31, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:02:36,909 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162077.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:01,316 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162105.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:06,797 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2716, 2.9529, 2.7640, 1.2756, 3.0099, 2.2592, 0.6469, 1.9397], device='cuda:6'), covar=tensor([0.2423, 0.2545, 0.1961, 0.3537, 0.1540, 0.1142, 0.4317, 0.1741], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0179, 0.0160, 0.0128, 0.0162, 0.0123, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 11:03:07,997 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162116.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:10,730 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 11:03:16,127 INFO [finetune.py:976] (6/7) Epoch 29, batch 1750, loss[loss=0.1452, simple_loss=0.2265, pruned_loss=0.03195, over 4793.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2395, pruned_loss=0.0475, over 958107.73 frames. 
], batch size: 29, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:03:24,604 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162138.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:03:59,956 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162164.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:04:04,290 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162171.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:04:10,046 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.681e+01 1.626e+02 1.856e+02 2.284e+02 4.131e+02, threshold=3.712e+02, percent-clipped=2.0 2023-03-27 11:04:10,647 INFO [finetune.py:976] (6/7) Epoch 29, batch 1800, loss[loss=0.1269, simple_loss=0.1839, pruned_loss=0.03495, over 4152.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2421, pruned_loss=0.04791, over 957145.60 frames. ], batch size: 18, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:04:28,168 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2600, 1.7360, 2.5338, 4.2130, 2.9614, 2.9745, 1.1913, 3.8455], device='cuda:6'), covar=tensor([0.2011, 0.1964, 0.1604, 0.0778, 0.0898, 0.1481, 0.2196, 0.0421], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0131, 0.0163, 0.0099, 0.0134, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:04:44,453 INFO [finetune.py:976] (6/7) Epoch 29, batch 1850, loss[loss=0.1264, simple_loss=0.1937, pruned_loss=0.02957, over 3991.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2424, pruned_loss=0.04794, over 956904.90 frames. ], batch size: 17, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:04:48,180 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162231.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:05:12,332 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.25 vs. limit=2.0 2023-03-27 11:05:17,383 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.145e+02 1.604e+02 1.802e+02 2.267e+02 3.875e+02, threshold=3.605e+02, percent-clipped=1.0 2023-03-27 11:05:17,995 INFO [finetune.py:976] (6/7) Epoch 29, batch 1900, loss[loss=0.2002, simple_loss=0.27, pruned_loss=0.06518, over 4890.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2428, pruned_loss=0.04757, over 955625.34 frames. ], batch size: 32, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:05:18,084 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([5.2329, 4.5766, 4.7892, 5.1315, 5.0056, 4.7437, 5.3827, 1.5560], device='cuda:6'), covar=tensor([0.0734, 0.0890, 0.0783, 0.0899, 0.1161, 0.1577, 0.0478, 0.6051], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0284, 0.0296, 0.0337, 0.0285, 0.0305, 0.0301], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:05:28,462 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162292.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:05:37,219 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.15 vs. limit=2.0
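The [scaling.py:679] records track a whitening diagnostic per module: metric=1.15 vs. limit=2.0 above means the covariance of the activations across channels (evaluated in num_groups groups) is still close enough to isotropic that no correction kicks in. The exact metric lives in icefall's scaling.py and is not shown in this log; the sketch below uses a stand-in measure in the same spirit, E[lambda^2] / E[lambda]^2 over the eigenvalues of the per-group channel covariance, which is 1.0 for a perfectly white signal and grows as variance concentrates in a few directions.

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> float:
    """Whiteness of channel activations. Illustrative stand-in, not the
    metric scaling.py actually computes.

    x: (num_frames, num_channels) activations.
    """
    num_frames, num_channels = x.shape
    x = x.reshape(num_frames, num_groups, num_channels // num_groups)
    x = x - x.mean(dim=0, keepdim=True)
    # per-group channel covariance, shape (num_groups, c, c)
    cov = torch.einsum("ngi,ngj->gij", x, x) / num_frames
    eigs = torch.linalg.eigvalsh(cov)  # (num_groups, c)
    # 1.0 iff all eigenvalues within a group are equal (identity-like cov)
    metric = (eigs**2).mean(dim=1) / eigs.mean(dim=1) ** 2
    return metric.mean().item()

# White noise scores close to 1.0, i.e. well under limit=2.0:
print(whitening_metric(torch.randn(4000, 96), num_groups=8))
```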
2023-03-27 11:05:48,082 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6818, 3.7593, 3.5459, 1.8991, 3.9357, 2.9591, 1.0015, 2.7565], device='cuda:6'), covar=tensor([0.2466, 0.2101, 0.1669, 0.3229, 0.0984, 0.0987, 0.4350, 0.1470], device='cuda:6'), in_proj_covar=tensor([0.0150, 0.0179, 0.0159, 0.0128, 0.0162, 0.0123, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 11:05:51,672 INFO [finetune.py:976] (6/7) Epoch 29, batch 1950, loss[loss=0.1817, simple_loss=0.2494, pruned_loss=0.05698, over 4847.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2408, pruned_loss=0.04703, over 955362.95 frames. ], batch size: 44, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:06:06,883 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3514, 2.3654, 2.3623, 1.7485, 2.2088, 2.6337, 2.6112, 2.1024], device='cuda:6'), covar=tensor([0.0594, 0.0643, 0.0752, 0.0867, 0.1073, 0.0652, 0.0559, 0.1036], device='cuda:6'), in_proj_covar=tensor([0.0130, 0.0137, 0.0139, 0.0118, 0.0127, 0.0139, 0.0139, 0.0161], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:06:23,058 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4580, 2.3319, 1.8594, 2.4312, 2.3738, 2.1004, 2.7737, 2.3974], device='cuda:6'), covar=tensor([0.1303, 0.1915, 0.2939, 0.2454, 0.2514, 0.1761, 0.2542, 0.1783], device='cuda:6'), in_proj_covar=tensor([0.0189, 0.0190, 0.0236, 0.0253, 0.0250, 0.0208, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:06:28,868 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6481, 1.4643, 2.2534, 3.2615, 2.1798, 2.5040, 1.1598, 2.7618], device='cuda:6'), covar=tensor([0.1710, 0.1369, 0.1182, 0.0519, 0.0811, 0.1609, 0.1597, 0.0460], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0132, 0.0163, 0.0099, 0.0135, 0.0124, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:06:31,902 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.49 vs. limit=2.0 2023-03-27 11:06:36,624 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.724e+01 1.430e+02 1.855e+02 2.130e+02 3.474e+02, threshold=3.709e+02, percent-clipped=0.0 2023-03-27 11:06:37,252 INFO [finetune.py:976] (6/7) Epoch 29, batch 2000, loss[loss=0.1754, simple_loss=0.2461, pruned_loss=0.05231, over 4940.00 frames. ], tot_loss[loss=0.1657, simple_loss=0.2385, pruned_loss=0.04647, over 955378.27 frames. ], batch size: 33, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:07:01,060 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 11:07:14,705 INFO [finetune.py:976] (6/7) Epoch 29, batch 2050, loss[loss=0.1481, simple_loss=0.2157, pruned_loss=0.04028, over 4810.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.2365, pruned_loss=0.04616, over 956688.91 frames.
], batch size: 45, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:07:19,657 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162433.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:07:32,445 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1954, 2.0010, 1.9747, 0.9669, 2.2800, 2.4595, 2.1245, 1.8728], device='cuda:6'), covar=tensor([0.1102, 0.0726, 0.0593, 0.0750, 0.0579, 0.0806, 0.0539, 0.0953], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0149, 0.0132, 0.0123, 0.0132, 0.0131, 0.0143, 0.0153], device='cuda:6'), out_proj_covar=tensor([8.8475e-05, 1.0647e-04, 9.3787e-05, 8.6250e-05, 9.2764e-05, 9.2580e-05, 1.0136e-04, 1.0908e-04], device='cuda:6') 2023-03-27 11:07:45,883 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162471.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:07:47,600 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.116e+01 1.601e+02 1.964e+02 2.330e+02 4.874e+02, threshold=3.928e+02, percent-clipped=3.0 2023-03-27 11:07:48,229 INFO [finetune.py:976] (6/7) Epoch 29, batch 2100, loss[loss=0.2262, simple_loss=0.2929, pruned_loss=0.07977, over 4840.00 frames. ], tot_loss[loss=0.1646, simple_loss=0.2365, pruned_loss=0.04634, over 956225.35 frames. ], batch size: 47, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:07:53,318 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.6840, 2.6723, 2.5902, 1.7379, 2.6171, 2.9177, 2.9260, 2.2841], device='cuda:6'), covar=tensor([0.0566, 0.0602, 0.0701, 0.0861, 0.0644, 0.0622, 0.0551, 0.1139], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0138, 0.0139, 0.0118, 0.0128, 0.0139, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:08:22,573 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162511.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:08:28,429 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162519.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:08:34,949 INFO [finetune.py:976] (6/7) Epoch 29, batch 2150, loss[loss=0.2099, simple_loss=0.2841, pruned_loss=0.06788, over 4258.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2392, pruned_loss=0.04679, over 955710.01 frames. ], batch size: 65, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:08:35,765 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.85 vs. limit=2.0 2023-03-27 11:09:17,083 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162572.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:09:18,163 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.053e+02 1.475e+02 1.779e+02 2.116e+02 3.329e+02, threshold=3.558e+02, percent-clipped=0.0 2023-03-27 11:09:18,785 INFO [finetune.py:976] (6/7) Epoch 29, batch 2200, loss[loss=0.1625, simple_loss=0.234, pruned_loss=0.04547, over 4766.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2417, pruned_loss=0.04744, over 955087.07 frames. ], batch size: 26, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:09:27,174 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162587.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:10:00,796 INFO [finetune.py:976] (6/7) Epoch 29, batch 2250, loss[loss=0.1584, simple_loss=0.2443, pruned_loss=0.0363, over 4787.00 frames. 
], tot_loss[loss=0.1699, simple_loss=0.2432, pruned_loss=0.04832, over 952707.90 frames. ], batch size: 29, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:10:18,806 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7102, 1.6017, 1.5536, 1.6078, 1.4108, 3.3892, 1.4700, 1.8848], device='cuda:6'), covar=tensor([0.3893, 0.3003, 0.2356, 0.2850, 0.1657, 0.0307, 0.2432, 0.1223], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0093, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 11:10:32,202 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.24 vs. limit=2.0 2023-03-27 11:10:33,052 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0034, 1.6013, 2.3698, 1.4280, 1.9504, 2.2732, 1.4790, 2.2812], device='cuda:6'), covar=tensor([0.1400, 0.2281, 0.1149, 0.2059, 0.1144, 0.1505, 0.2868, 0.1156], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0206, 0.0193, 0.0190, 0.0175, 0.0213, 0.0218, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:10:33,518 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.222e+01 1.558e+02 1.861e+02 2.054e+02 5.864e+02, threshold=3.723e+02, percent-clipped=1.0 2023-03-27 11:10:34,117 INFO [finetune.py:976] (6/7) Epoch 29, batch 2300, loss[loss=0.1465, simple_loss=0.2286, pruned_loss=0.03225, over 4816.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2426, pruned_loss=0.04769, over 952840.96 frames. ], batch size: 33, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:10:44,040 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7157, 1.5367, 2.1131, 3.4023, 2.3048, 2.3889, 0.8622, 2.8461], device='cuda:6'), covar=tensor([0.1761, 0.1394, 0.1315, 0.0611, 0.0795, 0.1494, 0.2046, 0.0505], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0114, 0.0132, 0.0163, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:11:09,593 INFO [finetune.py:976] (6/7) Epoch 29, batch 2350, loss[loss=0.1439, simple_loss=0.2063, pruned_loss=0.04079, over 4830.00 frames. ], tot_loss[loss=0.1676, simple_loss=0.2406, pruned_loss=0.04729, over 953342.07 frames. ], batch size: 33, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:11:20,055 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162733.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:11:42,986 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8939, 1.3516, 1.9584, 1.9255, 1.7732, 1.7156, 1.9404, 1.8626], device='cuda:6'), covar=tensor([0.4030, 0.3922, 0.3028, 0.3662, 0.4376, 0.3634, 0.4120, 0.2846], device='cuda:6'), in_proj_covar=tensor([0.0271, 0.0250, 0.0270, 0.0300, 0.0299, 0.0277, 0.0304, 0.0254], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:11:50,493 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.467e+01 1.478e+02 1.866e+02 2.325e+02 4.091e+02, threshold=3.732e+02, percent-clipped=2.0 2023-03-27 11:11:51,107 INFO [finetune.py:976] (6/7) Epoch 29, batch 2400, loss[loss=0.1946, simple_loss=0.2538, pruned_loss=0.0677, over 4138.00 frames. ], tot_loss[loss=0.1653, simple_loss=0.2379, pruned_loss=0.04636, over 954386.08 frames. 
], batch size: 65, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:11:56,643 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162781.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:12:18,084 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9961, 1.8291, 2.4049, 3.5329, 2.4271, 2.7544, 1.4986, 2.9582], device='cuda:6'), covar=tensor([0.1516, 0.1262, 0.1152, 0.0561, 0.0757, 0.1210, 0.1579, 0.0491], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0114, 0.0132, 0.0163, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:12:33,618 INFO [finetune.py:976] (6/7) Epoch 29, batch 2450, loss[loss=0.1705, simple_loss=0.2364, pruned_loss=0.05232, over 4904.00 frames. ], tot_loss[loss=0.163, simple_loss=0.235, pruned_loss=0.0455, over 953066.40 frames. ], batch size: 43, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:12:35,954 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=162827.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:12:54,358 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8686, 3.7947, 3.6569, 1.9382, 3.9864, 3.0529, 1.2579, 2.8429], device='cuda:6'), covar=tensor([0.2266, 0.2484, 0.1719, 0.3641, 0.1315, 0.0975, 0.4559, 0.1585], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0181, 0.0159, 0.0129, 0.0163, 0.0124, 0.0149, 0.0125], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 11:13:00,446 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1460, 1.4789, 0.8885, 1.9400, 2.3311, 1.7759, 1.7285, 1.9162], device='cuda:6'), covar=tensor([0.1420, 0.2097, 0.2121, 0.1170, 0.1912, 0.1946, 0.1431, 0.1962], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0109, 0.0093, 0.0119, 0.0092, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:13:02,268 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=162867.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 11:13:09,317 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.738e+01 1.481e+02 1.739e+02 2.016e+02 4.521e+02, threshold=3.479e+02, percent-clipped=1.0 2023-03-27 11:13:09,947 INFO [finetune.py:976] (6/7) Epoch 29, batch 2500, loss[loss=0.2173, simple_loss=0.2827, pruned_loss=0.07591, over 4819.00 frames. ], tot_loss[loss=0.1654, simple_loss=0.2373, pruned_loss=0.04678, over 952620.88 frames. ], batch size: 40, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:13:27,467 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=162887.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:13:28,105 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=162888.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:13:52,411 INFO [finetune.py:976] (6/7) Epoch 29, batch 2550, loss[loss=0.1734, simple_loss=0.249, pruned_loss=0.04884, over 4831.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2407, pruned_loss=0.0475, over 952246.32 frames. 
], batch size: 47, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:13:59,196 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9693, 1.8510, 1.6009, 1.4699, 1.9792, 1.7231, 1.8445, 1.9702], device='cuda:6'), covar=tensor([0.1399, 0.1883, 0.2956, 0.2420, 0.2669, 0.1745, 0.2784, 0.1764], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0190, 0.0235, 0.0252, 0.0250, 0.0208, 0.0215, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:14:01,595 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=162935.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:14:17,181 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 11:14:33,651 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.9219, 2.6261, 2.4795, 1.2699, 2.6168, 2.1744, 1.9898, 2.4852], device='cuda:6'), covar=tensor([0.1272, 0.0798, 0.2014, 0.2235, 0.1593, 0.2125, 0.2181, 0.1178], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0187, 0.0200, 0.0180, 0.0207, 0.0209, 0.0221, 0.0193], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:14:36,962 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.986e+01 1.516e+02 1.791e+02 2.248e+02 3.208e+02, threshold=3.582e+02, percent-clipped=0.0 2023-03-27 11:14:37,597 INFO [finetune.py:976] (6/7) Epoch 29, batch 2600, loss[loss=0.2047, simple_loss=0.278, pruned_loss=0.06569, over 4803.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2432, pruned_loss=0.04843, over 953139.32 frames. ], batch size: 45, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:14:47,239 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4325, 1.3704, 0.8956, 0.2989, 1.2075, 1.3391, 1.3325, 1.2502], device='cuda:6'), covar=tensor([0.0774, 0.0787, 0.1245, 0.1701, 0.1175, 0.1899, 0.1918, 0.0824], device='cuda:6'), in_proj_covar=tensor([0.0170, 0.0187, 0.0201, 0.0180, 0.0208, 0.0209, 0.0222, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:14:48,963 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 11:15:17,742 INFO [finetune.py:976] (6/7) Epoch 29, batch 2650, loss[loss=0.176, simple_loss=0.2648, pruned_loss=0.04359, over 4815.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2427, pruned_loss=0.04825, over 951939.53 frames. 
], batch size: 38, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:15:23,188 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8139, 1.2294, 0.9948, 1.5984, 2.2150, 1.4354, 1.4502, 1.6078], device='cuda:6'), covar=tensor([0.1473, 0.2227, 0.1951, 0.1286, 0.1913, 0.1845, 0.1602, 0.2122], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0108, 0.0092, 0.0119, 0.0091, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:15:29,782 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7420, 1.6833, 1.4647, 1.4664, 1.8140, 1.5493, 1.8542, 1.7746], device='cuda:6'), covar=tensor([0.1327, 0.1755, 0.2667, 0.2277, 0.2482, 0.1563, 0.2436, 0.1540], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0190, 0.0236, 0.0253, 0.0250, 0.0208, 0.0216, 0.0203], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:15:51,095 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.228e+01 1.484e+02 1.662e+02 2.061e+02 3.704e+02, threshold=3.324e+02, percent-clipped=1.0 2023-03-27 11:15:51,717 INFO [finetune.py:976] (6/7) Epoch 29, batch 2700, loss[loss=0.1691, simple_loss=0.246, pruned_loss=0.04609, over 4814.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2423, pruned_loss=0.04794, over 951648.29 frames. ], batch size: 38, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:16:02,231 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=163090.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:16:14,565 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0212, 1.4492, 2.0776, 2.0557, 1.8350, 1.8154, 2.0124, 1.9457], device='cuda:6'), covar=tensor([0.3787, 0.3841, 0.3111, 0.3472, 0.4700, 0.4035, 0.4170, 0.2866], device='cuda:6'), in_proj_covar=tensor([0.0269, 0.0249, 0.0269, 0.0299, 0.0298, 0.0276, 0.0304, 0.0254], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:16:24,083 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.18 vs. limit=5.0 2023-03-27 11:16:25,136 INFO [finetune.py:976] (6/7) Epoch 29, batch 2750, loss[loss=0.1349, simple_loss=0.2063, pruned_loss=0.03177, over 4767.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2387, pruned_loss=0.04701, over 951672.85 frames. 
], batch size: 23, lr: 2.84e-03, grad_scale: 32.0 2023-03-27 11:16:41,859 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2344, 1.3642, 0.9332, 1.9479, 2.5283, 1.9075, 1.6491, 1.9281], device='cuda:6'), covar=tensor([0.1224, 0.2022, 0.1859, 0.1058, 0.1613, 0.1829, 0.1370, 0.1911], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0108, 0.0093, 0.0119, 0.0092, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:16:52,780 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=163151.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:17:03,844 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163167.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:17:05,652 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8250, 1.2672, 0.8386, 1.5733, 2.2698, 1.1594, 1.4959, 1.6314], device='cuda:6'), covar=tensor([0.1325, 0.1984, 0.1774, 0.1163, 0.1716, 0.1836, 0.1378, 0.1860], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0093, 0.0109, 0.0093, 0.0119, 0.0092, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:17:10,460 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.013e+02 1.483e+02 1.714e+02 1.995e+02 3.279e+02, threshold=3.429e+02, percent-clipped=0.0 2023-03-27 11:17:10,476 INFO [finetune.py:976] (6/7) Epoch 29, batch 2800, loss[loss=0.1859, simple_loss=0.2526, pruned_loss=0.05967, over 4912.00 frames. ], tot_loss[loss=0.1625, simple_loss=0.2345, pruned_loss=0.04523, over 949393.15 frames. ], batch size: 36, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:17:15,462 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=163183.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:17:35,986 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.16 vs. limit=2.0 2023-03-27 11:17:37,948 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163215.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:17:44,449 INFO [finetune.py:976] (6/7) Epoch 29, batch 2850, loss[loss=0.1738, simple_loss=0.245, pruned_loss=0.05128, over 4815.00 frames. ], tot_loss[loss=0.1614, simple_loss=0.2337, pruned_loss=0.04453, over 950451.98 frames. ], batch size: 38, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:17:59,038 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=163240.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:17:59,648 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7138, 1.6146, 1.3906, 1.7703, 2.1803, 1.8134, 1.7170, 1.3653], device='cuda:6'), covar=tensor([0.2256, 0.2121, 0.2176, 0.1678, 0.1707, 0.1378, 0.2294, 0.2130], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0213, 0.0217, 0.0199, 0.0246, 0.0191, 0.0218, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:18:31,674 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.921e+01 1.528e+02 1.890e+02 2.266e+02 3.566e+02, threshold=3.779e+02, percent-clipped=1.0 2023-03-27 11:18:31,690 INFO [finetune.py:976] (6/7) Epoch 29, batch 2900, loss[loss=0.1385, simple_loss=0.2161, pruned_loss=0.03042, over 4710.00 frames. ], tot_loss[loss=0.163, simple_loss=0.2358, pruned_loss=0.04509, over 950068.17 frames. 
], batch size: 23, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:18:33,623 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5835, 1.5476, 2.1883, 1.8111, 1.7449, 3.7138, 1.5637, 1.7541], device='cuda:6'), covar=tensor([0.1091, 0.1945, 0.1079, 0.1033, 0.1710, 0.0270, 0.1606, 0.2034], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0083, 0.0074, 0.0077, 0.0092, 0.0081, 0.0086, 0.0081], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 11:18:38,924 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.1228, 1.8969, 2.0632, 0.9134, 2.3693, 2.5435, 2.1765, 1.7963], device='cuda:6'), covar=tensor([0.1025, 0.0842, 0.0545, 0.0707, 0.0498, 0.0747, 0.0570, 0.0907], device='cuda:6'), in_proj_covar=tensor([0.0120, 0.0147, 0.0131, 0.0122, 0.0131, 0.0130, 0.0142, 0.0152], device='cuda:6'), out_proj_covar=tensor([8.7871e-05, 1.0573e-04, 9.3068e-05, 8.5318e-05, 9.1872e-05, 9.1989e-05, 1.0058e-04, 1.0841e-04], device='cuda:6') 2023-03-27 11:18:52,054 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=163301.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:19:08,251 INFO [finetune.py:976] (6/7) Epoch 29, batch 2950, loss[loss=0.1624, simple_loss=0.2368, pruned_loss=0.04404, over 4895.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2402, pruned_loss=0.04664, over 952934.59 frames. ], batch size: 32, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:19:20,691 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.43 vs. limit=5.0 2023-03-27 11:19:21,702 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.39 vs. limit=2.0 2023-03-27 11:19:37,287 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.7892, 4.1917, 4.3731, 4.5669, 4.5124, 4.1923, 4.8965, 1.6597], device='cuda:6'), covar=tensor([0.0699, 0.0897, 0.0910, 0.0944, 0.1094, 0.1558, 0.0532, 0.5692], device='cuda:6'), in_proj_covar=tensor([0.0351, 0.0245, 0.0286, 0.0295, 0.0338, 0.0285, 0.0304, 0.0302], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:19:45,373 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7006, 2.4411, 2.0173, 1.0020, 2.2864, 2.1673, 1.9044, 2.3390], device='cuda:6'), covar=tensor([0.0826, 0.0754, 0.1551, 0.2035, 0.1084, 0.1951, 0.2079, 0.0831], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0188, 0.0201, 0.0181, 0.0209, 0.0210, 0.0224, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:19:48,846 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2883, 2.2053, 2.0681, 1.2266, 2.1791, 1.9085, 1.7554, 2.1481], device='cuda:6'), covar=tensor([0.1110, 0.0733, 0.1329, 0.1868, 0.1263, 0.1970, 0.1963, 0.0884], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0188, 0.0201, 0.0181, 0.0209, 0.0210, 0.0224, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:19:49,295 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.216e+02 1.530e+02 1.927e+02 2.309e+02 4.929e+02, threshold=3.855e+02, percent-clipped=3.0 2023-03-27 11:19:49,311 INFO [finetune.py:976] (6/7) Epoch 29, batch 3000, loss[loss=0.1602, simple_loss=0.2399, pruned_loss=0.04024, over 4094.00 frames. 
], tot_loss[loss=0.1671, simple_loss=0.241, pruned_loss=0.0466, over 952580.72 frames. ], batch size: 65, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:19:49,311 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 11:19:52,891 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6190, 1.5492, 1.5143, 1.5841, 1.1756, 2.9735, 1.2619, 1.6710], device='cuda:6'), covar=tensor([0.3284, 0.2347, 0.2082, 0.2205, 0.1713, 0.0264, 0.2512, 0.1188], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 11:19:56,071 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5730, 3.5282, 3.4556, 1.5036, 3.6356, 2.8783, 0.8278, 2.4091], device='cuda:6'), covar=tensor([0.2011, 0.1993, 0.1521, 0.3486, 0.1096, 0.0959, 0.3856, 0.1563], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0181, 0.0160, 0.0130, 0.0163, 0.0124, 0.0150, 0.0126], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 11:20:05,061 INFO [finetune.py:1010] (6/7) Epoch 29, validation: loss=0.158, simple_loss=0.2251, pruned_loss=0.04545, over 2265189.00 frames. 2023-03-27 11:20:05,062 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 11:20:24,017 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1049, 2.1040, 2.1554, 1.4186, 1.9943, 2.2696, 2.2083, 1.7153], device='cuda:6'), covar=tensor([0.0598, 0.0636, 0.0692, 0.0888, 0.0754, 0.0625, 0.0529, 0.1138], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0140, 0.0118, 0.0128, 0.0139, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:20:36,201 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6376, 1.6331, 2.0201, 3.3167, 2.2619, 2.3780, 1.0353, 2.8418], device='cuda:6'), covar=tensor([0.1678, 0.1248, 0.1238, 0.0554, 0.0744, 0.1150, 0.1792, 0.0421], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0132, 0.0163, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:20:43,037 INFO [finetune.py:976] (6/7) Epoch 29, batch 3050, loss[loss=0.1595, simple_loss=0.2429, pruned_loss=0.03804, over 4763.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.2435, pruned_loss=0.04762, over 952163.69 frames. 
], batch size: 54, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:20:57,925 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=163446.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:21:02,228 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6963, 1.6298, 1.5736, 1.6147, 1.1650, 3.6417, 1.4323, 1.8503], device='cuda:6'), covar=tensor([0.3273, 0.2407, 0.2123, 0.2384, 0.1778, 0.0198, 0.2583, 0.1208], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 11:21:16,334 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.820e+01 1.385e+02 1.622e+02 1.937e+02 3.745e+02, threshold=3.244e+02, percent-clipped=0.0 2023-03-27 11:21:16,350 INFO [finetune.py:976] (6/7) Epoch 29, batch 3100, loss[loss=0.1806, simple_loss=0.2467, pruned_loss=0.05729, over 4906.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2411, pruned_loss=0.04668, over 953319.90 frames. ], batch size: 43, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:21:22,349 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163483.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:21:41,994 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9739, 1.8580, 1.7816, 1.8385, 1.5390, 4.5545, 1.7139, 2.1483], device='cuda:6'), covar=tensor([0.3277, 0.2429, 0.2050, 0.2314, 0.1549, 0.0124, 0.2425, 0.1179], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 11:21:51,333 INFO [finetune.py:976] (6/7) Epoch 29, batch 3150, loss[loss=0.1534, simple_loss=0.223, pruned_loss=0.04191, over 4881.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2372, pruned_loss=0.04524, over 954331.64 frames. ], batch size: 31, lr: 2.84e-03, grad_scale: 16.0 2023-03-27 11:21:55,512 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163531.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:22:27,824 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.27 vs. limit=2.0 2023-03-27 11:22:33,524 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.004e+02 1.497e+02 1.704e+02 2.227e+02 4.213e+02, threshold=3.407e+02, percent-clipped=5.0 2023-03-27 11:22:33,540 INFO [finetune.py:976] (6/7) Epoch 29, batch 3200, loss[loss=0.167, simple_loss=0.2346, pruned_loss=0.04972, over 4823.00 frames. ], tot_loss[loss=0.1619, simple_loss=0.2343, pruned_loss=0.04479, over 953530.03 frames. 
], batch size: 39, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:22:49,214 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=163596.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:22:59,113 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5483, 1.4740, 1.9217, 2.9265, 1.9597, 2.2100, 0.8459, 2.5272], device='cuda:6'), covar=tensor([0.1876, 0.1437, 0.1297, 0.0684, 0.0904, 0.1589, 0.2059, 0.0514], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:23:01,317 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.34 vs. limit=2.0 2023-03-27 11:23:07,475 INFO [finetune.py:976] (6/7) Epoch 29, batch 3250, loss[loss=0.1246, simple_loss=0.1942, pruned_loss=0.02752, over 4160.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.237, pruned_loss=0.04644, over 952946.17 frames. ], batch size: 17, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:23:52,620 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.061e+02 1.629e+02 1.981e+02 2.506e+02 4.570e+02, threshold=3.962e+02, percent-clipped=6.0 2023-03-27 11:23:52,636 INFO [finetune.py:976] (6/7) Epoch 29, batch 3300, loss[loss=0.1579, simple_loss=0.2454, pruned_loss=0.03523, over 4760.00 frames. ], tot_loss[loss=0.1669, simple_loss=0.2393, pruned_loss=0.04722, over 952349.98 frames. ], batch size: 28, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:24:29,887 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4318, 1.2346, 1.3365, 1.2360, 1.6036, 1.5780, 1.3991, 1.2551], device='cuda:6'), covar=tensor([0.0540, 0.0343, 0.0587, 0.0405, 0.0333, 0.0404, 0.0375, 0.0479], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0106, 0.0148, 0.0112, 0.0103, 0.0117, 0.0105, 0.0115], device='cuda:6'), out_proj_covar=tensor([7.9039e-05, 8.1144e-05, 1.1525e-04, 8.4967e-05, 7.9363e-05, 8.6344e-05, 7.7773e-05, 8.7268e-05], device='cuda:6') 2023-03-27 11:24:34,739 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.65 vs. limit=2.0 2023-03-27 11:24:37,007 INFO [finetune.py:976] (6/7) Epoch 29, batch 3350, loss[loss=0.1309, simple_loss=0.197, pruned_loss=0.03235, over 3818.00 frames. ], tot_loss[loss=0.1702, simple_loss=0.243, pruned_loss=0.04875, over 949504.41 frames. ], batch size: 16, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:24:55,818 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163746.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:25:21,353 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.486e+02 1.801e+02 2.174e+02 4.171e+02, threshold=3.603e+02, percent-clipped=1.0 2023-03-27 11:25:21,369 INFO [finetune.py:976] (6/7) Epoch 29, batch 3400, loss[loss=0.151, simple_loss=0.2392, pruned_loss=0.03137, over 4814.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.2442, pruned_loss=0.04928, over 952735.33 frames. ], batch size: 40, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:25:37,709 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163794.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:25:54,567 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.84 vs. limit=2.0 2023-03-27 11:25:58,982 INFO [finetune.py:976] (6/7) Epoch 29, batch 3450, loss[loss=0.1586, simple_loss=0.231, pruned_loss=0.0431, over 4764.00 frames. 
], tot_loss[loss=0.1699, simple_loss=0.2429, pruned_loss=0.0484, over 952152.44 frames. ], batch size: 23, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:26:06,001 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.14 vs. limit=2.0 2023-03-27 11:26:41,241 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.103e+01 1.539e+02 1.877e+02 2.200e+02 3.171e+02, threshold=3.754e+02, percent-clipped=0.0 2023-03-27 11:26:41,257 INFO [finetune.py:976] (6/7) Epoch 29, batch 3500, loss[loss=0.1389, simple_loss=0.2194, pruned_loss=0.02919, over 4823.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2407, pruned_loss=0.04752, over 951846.64 frames. ], batch size: 40, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:26:55,493 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=163896.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:27:17,137 INFO [finetune.py:976] (6/7) Epoch 29, batch 3550, loss[loss=0.1517, simple_loss=0.219, pruned_loss=0.04224, over 4875.00 frames. ], tot_loss[loss=0.166, simple_loss=0.2383, pruned_loss=0.04683, over 954632.72 frames. ], batch size: 34, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:27:29,364 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4876, 1.3848, 0.9397, 0.2675, 1.2314, 1.3348, 1.3766, 1.3042], device='cuda:6'), covar=tensor([0.0780, 0.0673, 0.1196, 0.1678, 0.1188, 0.2131, 0.2050, 0.0779], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0190, 0.0203, 0.0182, 0.0211, 0.0212, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:27:38,584 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=163944.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:27:51,232 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.95 vs. limit=5.0 2023-03-27 11:27:59,311 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.875e+01 1.446e+02 1.734e+02 2.162e+02 4.667e+02, threshold=3.468e+02, percent-clipped=1.0 2023-03-27 11:27:59,327 INFO [finetune.py:976] (6/7) Epoch 29, batch 3600, loss[loss=0.1607, simple_loss=0.2284, pruned_loss=0.04645, over 4715.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2365, pruned_loss=0.04672, over 956239.28 frames. ], batch size: 23, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:28:36,275 INFO [finetune.py:976] (6/7) Epoch 29, batch 3650, loss[loss=0.1641, simple_loss=0.2263, pruned_loss=0.05093, over 4166.00 frames. ], tot_loss[loss=0.1701, simple_loss=0.2412, pruned_loss=0.04947, over 953938.64 frames. ], batch size: 18, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:29:19,113 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.932e+01 1.541e+02 1.828e+02 2.331e+02 7.133e+02, threshold=3.656e+02, percent-clipped=4.0 2023-03-27 11:29:19,129 INFO [finetune.py:976] (6/7) Epoch 29, batch 3700, loss[loss=0.1475, simple_loss=0.2169, pruned_loss=0.03902, over 4716.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.2428, pruned_loss=0.04928, over 951403.24 frames. ], batch size: 23, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:29:37,329 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164090.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:30:00,906 INFO [finetune.py:976] (6/7) Epoch 29, batch 3750, loss[loss=0.1701, simple_loss=0.2559, pruned_loss=0.04218, over 4832.00 frames. 
], tot_loss[loss=0.1698, simple_loss=0.2425, pruned_loss=0.04852, over 951784.83 frames. ], batch size: 47, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:30:07,063 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2315, 1.5191, 0.9849, 1.9949, 2.3984, 1.7908, 1.6857, 1.8485], device='cuda:6'), covar=tensor([0.1369, 0.2000, 0.1765, 0.1146, 0.1872, 0.1847, 0.1461, 0.1942], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0093, 0.0120, 0.0092, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 11:30:17,253 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164151.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:30:20,049 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164155.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:30:36,053 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.94 vs. limit=2.0 2023-03-27 11:30:37,149 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.292e+01 1.655e+02 1.832e+02 2.179e+02 3.362e+02, threshold=3.664e+02, percent-clipped=0.0 2023-03-27 11:30:37,165 INFO [finetune.py:976] (6/7) Epoch 29, batch 3800, loss[loss=0.207, simple_loss=0.2785, pruned_loss=0.0678, over 4879.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.2431, pruned_loss=0.04829, over 951869.49 frames. ], batch size: 35, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:31:03,882 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164216.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:31:09,781 INFO [finetune.py:976] (6/7) Epoch 29, batch 3850, loss[loss=0.1384, simple_loss=0.2139, pruned_loss=0.03148, over 4305.00 frames. ], tot_loss[loss=0.1683, simple_loss=0.2415, pruned_loss=0.04748, over 953431.67 frames. 
], batch size: 65, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:31:10,370 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5945, 1.5872, 1.1221, 0.3341, 1.3371, 1.4685, 1.4827, 1.5049], device='cuda:6'), covar=tensor([0.1013, 0.0800, 0.1404, 0.2147, 0.1365, 0.2691, 0.2448, 0.0819], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0190, 0.0203, 0.0182, 0.0211, 0.0212, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:31:20,481 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.0517, 4.9235, 4.7034, 2.9096, 5.0300, 3.9888, 0.9744, 3.6158], device='cuda:6'), covar=tensor([0.2070, 0.1770, 0.1168, 0.2620, 0.0747, 0.0695, 0.4315, 0.1157], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0180, 0.0161, 0.0131, 0.0163, 0.0124, 0.0150, 0.0126], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 11:31:23,610 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3110, 1.9337, 2.5432, 1.7277, 2.1973, 2.6330, 1.8639, 2.7028], device='cuda:6'), covar=tensor([0.1144, 0.1984, 0.1355, 0.1813, 0.1018, 0.1193, 0.2471, 0.0696], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0203, 0.0190, 0.0187, 0.0172, 0.0209, 0.0214, 0.0194], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:31:45,642 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.748e+01 1.394e+02 1.747e+02 2.221e+02 3.425e+02, threshold=3.494e+02, percent-clipped=0.0 2023-03-27 11:31:45,658 INFO [finetune.py:976] (6/7) Epoch 29, batch 3900, loss[loss=0.175, simple_loss=0.244, pruned_loss=0.05303, over 4849.00 frames. ], tot_loss[loss=0.1667, simple_loss=0.2392, pruned_loss=0.04708, over 955360.91 frames. ], batch size: 49, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:32:27,499 INFO [finetune.py:976] (6/7) Epoch 29, batch 3950, loss[loss=0.16, simple_loss=0.2296, pruned_loss=0.04523, over 4784.00 frames. ], tot_loss[loss=0.1641, simple_loss=0.236, pruned_loss=0.04609, over 956610.59 frames. ], batch size: 25, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:32:58,624 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164360.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:32:58,911 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.98 vs. limit=5.0 2023-03-27 11:33:03,093 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4659, 1.4129, 1.2507, 1.5536, 1.5028, 1.5715, 1.0003, 1.2495], device='cuda:6'), covar=tensor([0.2182, 0.1965, 0.2012, 0.1630, 0.1817, 0.1248, 0.2690, 0.1871], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0212, 0.0217, 0.0199, 0.0246, 0.0191, 0.0218, 0.0206], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:33:11,862 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.819e+01 1.491e+02 1.748e+02 1.970e+02 3.605e+02, threshold=3.496e+02, percent-clipped=1.0 2023-03-27 11:33:11,878 INFO [finetune.py:976] (6/7) Epoch 29, batch 4000, loss[loss=0.1699, simple_loss=0.2486, pruned_loss=0.04557, over 4850.00 frames. ], tot_loss[loss=0.1638, simple_loss=0.2358, pruned_loss=0.04586, over 957880.70 frames. 
], batch size: 49, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:33:42,971 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164421.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:33:45,304 INFO [finetune.py:976] (6/7) Epoch 29, batch 4050, loss[loss=0.1363, simple_loss=0.2231, pruned_loss=0.02476, over 4753.00 frames. ], tot_loss[loss=0.1673, simple_loss=0.2399, pruned_loss=0.0474, over 957870.41 frames. ], batch size: 27, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:34:06,779 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164446.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:34:15,376 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6561, 1.5744, 1.4977, 1.5669, 1.1668, 3.4230, 1.4032, 1.7289], device='cuda:6'), covar=tensor([0.3524, 0.2608, 0.2223, 0.2441, 0.1811, 0.0244, 0.2790, 0.1285], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0124, 0.0113, 0.0096, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 11:34:29,031 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.788e+01 1.600e+02 1.814e+02 2.182e+02 3.811e+02, threshold=3.628e+02, percent-clipped=1.0 2023-03-27 11:34:29,047 INFO [finetune.py:976] (6/7) Epoch 29, batch 4100, loss[loss=0.2007, simple_loss=0.2689, pruned_loss=0.06623, over 4890.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2412, pruned_loss=0.04722, over 955904.85 frames. ], batch size: 35, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:35:04,641 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164511.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:35:13,893 INFO [finetune.py:976] (6/7) Epoch 29, batch 4150, loss[loss=0.1717, simple_loss=0.2469, pruned_loss=0.04826, over 4921.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2429, pruned_loss=0.04827, over 956758.49 frames. ], batch size: 33, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:35:42,372 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164568.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:35:46,475 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.025e+02 1.582e+02 1.835e+02 2.360e+02 4.097e+02, threshold=3.670e+02, percent-clipped=1.0 2023-03-27 11:35:46,491 INFO [finetune.py:976] (6/7) Epoch 29, batch 4200, loss[loss=0.1393, simple_loss=0.22, pruned_loss=0.02927, over 4763.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2425, pruned_loss=0.04755, over 954366.08 frames. ], batch size: 26, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:35:48,884 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164578.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:35:53,151 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.39 vs. limit=5.0 2023-03-27 11:36:20,292 INFO [finetune.py:976] (6/7) Epoch 29, batch 4250, loss[loss=0.1837, simple_loss=0.2569, pruned_loss=0.05521, over 4808.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2408, pruned_loss=0.04705, over 955808.51 frames. 
], batch size: 25, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:36:23,235 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164629.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:36:29,288 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164639.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:36:43,157 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164658.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:36:53,779 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.101e+02 1.450e+02 1.663e+02 2.112e+02 3.483e+02, threshold=3.326e+02, percent-clipped=0.0 2023-03-27 11:36:53,795 INFO [finetune.py:976] (6/7) Epoch 29, batch 4300, loss[loss=0.1917, simple_loss=0.2528, pruned_loss=0.06534, over 4828.00 frames. ], tot_loss[loss=0.1655, simple_loss=0.2382, pruned_loss=0.04645, over 955747.42 frames. ], batch size: 40, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:37:39,434 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164716.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:37:41,319 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164719.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:37:44,824 INFO [finetune.py:976] (6/7) Epoch 29, batch 4350, loss[loss=0.1762, simple_loss=0.2385, pruned_loss=0.05698, over 4908.00 frames. ], tot_loss[loss=0.1629, simple_loss=0.235, pruned_loss=0.04546, over 954540.14 frames. ], batch size: 36, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:37:56,920 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8698, 1.1545, 1.9308, 1.8639, 1.7102, 1.6757, 1.7752, 1.8966], device='cuda:6'), covar=tensor([0.4139, 0.3947, 0.3192, 0.3585, 0.4775, 0.3670, 0.4301, 0.2901], device='cuda:6'), in_proj_covar=tensor([0.0271, 0.0251, 0.0271, 0.0301, 0.0300, 0.0278, 0.0306, 0.0256], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:37:58,710 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=164746.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:37:58,723 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=164746.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:20,797 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.196e+01 1.395e+02 1.786e+02 2.060e+02 3.738e+02, threshold=3.572e+02, percent-clipped=2.0 2023-03-27 11:38:20,813 INFO [finetune.py:976] (6/7) Epoch 29, batch 4400, loss[loss=0.1885, simple_loss=0.2673, pruned_loss=0.05483, over 4932.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2355, pruned_loss=0.0455, over 953979.08 frames. 
], batch size: 38, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:38:33,467 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=164794.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:34,766 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.3266, 2.9211, 2.8088, 1.5020, 2.9243, 2.3756, 2.3143, 2.8009], device='cuda:6'), covar=tensor([0.1112, 0.0825, 0.1752, 0.2138, 0.1456, 0.2137, 0.1961, 0.1061], device='cuda:6'), in_proj_covar=tensor([0.0173, 0.0192, 0.0205, 0.0183, 0.0213, 0.0213, 0.0225, 0.0199], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:38:43,382 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=164807.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:46,308 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=164811.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:38:54,767 INFO [finetune.py:976] (6/7) Epoch 29, batch 4450, loss[loss=0.191, simple_loss=0.2724, pruned_loss=0.05476, over 4937.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2397, pruned_loss=0.04661, over 953212.97 frames. ], batch size: 33, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:39:08,323 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.95 vs. limit=2.0 2023-03-27 11:39:23,086 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=164859.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:39:37,137 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.009e+02 1.552e+02 1.889e+02 2.217e+02 4.905e+02, threshold=3.778e+02, percent-clipped=1.0 2023-03-27 11:39:37,153 INFO [finetune.py:976] (6/7) Epoch 29, batch 4500, loss[loss=0.1708, simple_loss=0.2457, pruned_loss=0.04795, over 4903.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2413, pruned_loss=0.04685, over 954431.26 frames. ], batch size: 43, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:40:11,927 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0996, 1.6446, 2.5645, 4.0586, 2.7953, 2.8625, 1.1271, 3.5788], device='cuda:6'), covar=tensor([0.1644, 0.1492, 0.1240, 0.0493, 0.0752, 0.1346, 0.1826, 0.0312], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0131, 0.0163, 0.0099, 0.0134, 0.0123, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:40:22,141 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164924.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:40:22,678 INFO [finetune.py:976] (6/7) Epoch 29, batch 4550, loss[loss=0.1653, simple_loss=0.2409, pruned_loss=0.0449, over 4849.00 frames. ], tot_loss[loss=0.168, simple_loss=0.242, pruned_loss=0.04701, over 955008.56 frames. ], batch size: 44, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:40:28,156 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=164934.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:40:55,994 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.494e+02 1.764e+02 2.048e+02 3.220e+02, threshold=3.528e+02, percent-clipped=0.0 2023-03-27 11:40:56,010 INFO [finetune.py:976] (6/7) Epoch 29, batch 4600, loss[loss=0.1727, simple_loss=0.262, pruned_loss=0.04167, over 4874.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2406, pruned_loss=0.04618, over 955323.28 frames. 
], batch size: 34, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:41:22,219 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165014.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:41:23,390 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165016.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:41:29,265 INFO [finetune.py:976] (6/7) Epoch 29, batch 4650, loss[loss=0.1553, simple_loss=0.224, pruned_loss=0.04329, over 4814.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2409, pruned_loss=0.04743, over 953632.87 frames. ], batch size: 45, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:41:42,178 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.79 vs. limit=2.0 2023-03-27 11:41:53,746 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165064.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:42:01,329 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.062e+02 1.478e+02 1.798e+02 2.114e+02 1.110e+03, threshold=3.597e+02, percent-clipped=3.0 2023-03-27 11:42:01,345 INFO [finetune.py:976] (6/7) Epoch 29, batch 4700, loss[loss=0.1582, simple_loss=0.2339, pruned_loss=0.04125, over 4859.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.2383, pruned_loss=0.04704, over 954232.59 frames. ], batch size: 49, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:42:27,140 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165102.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:42:43,764 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165123.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:42:44,886 INFO [finetune.py:976] (6/7) Epoch 29, batch 4750, loss[loss=0.1487, simple_loss=0.2035, pruned_loss=0.04697, over 4250.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.2361, pruned_loss=0.04635, over 953404.33 frames. ], batch size: 18, lr: 2.83e-03, grad_scale: 16.0 2023-03-27 11:43:06,870 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.2975, 2.9428, 3.0844, 3.2337, 3.0900, 2.9084, 3.3506, 0.9816], device='cuda:6'), covar=tensor([0.1169, 0.1082, 0.1225, 0.1298, 0.1645, 0.1986, 0.1131, 0.5915], device='cuda:6'), in_proj_covar=tensor([0.0355, 0.0248, 0.0288, 0.0298, 0.0339, 0.0288, 0.0308, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:43:21,524 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.139e+02 1.579e+02 1.786e+02 2.031e+02 3.721e+02, threshold=3.572e+02, percent-clipped=1.0 2023-03-27 11:43:21,540 INFO [finetune.py:976] (6/7) Epoch 29, batch 4800, loss[loss=0.2276, simple_loss=0.3015, pruned_loss=0.07685, over 4842.00 frames. ], tot_loss[loss=0.167, simple_loss=0.2392, pruned_loss=0.04742, over 955587.67 frames. ], batch size: 47, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:43:27,576 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165184.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:43:53,939 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165224.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:43:54,433 INFO [finetune.py:976] (6/7) Epoch 29, batch 4850, loss[loss=0.1552, simple_loss=0.2227, pruned_loss=0.04383, over 4435.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2425, pruned_loss=0.0483, over 954261.20 frames. 
], batch size: 19, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:44:00,486 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165234.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 11:44:22,087 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.6785, 1.4672, 1.4478, 0.7969, 1.6580, 1.8371, 1.7472, 1.3917], device='cuda:6'), covar=tensor([0.0990, 0.0723, 0.0552, 0.0618, 0.0494, 0.0557, 0.0384, 0.0724], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0147, 0.0131, 0.0121, 0.0131, 0.0130, 0.0141, 0.0151], device='cuda:6'), out_proj_covar=tensor([8.8130e-05, 1.0505e-04, 9.2971e-05, 8.4948e-05, 9.1720e-05, 9.2080e-05, 1.0047e-04, 1.0795e-04], device='cuda:6') 2023-03-27 11:44:24,479 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165272.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:44:26,734 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.028e+02 1.742e+02 1.982e+02 2.317e+02 4.079e+02, threshold=3.965e+02, percent-clipped=3.0 2023-03-27 11:44:26,750 INFO [finetune.py:976] (6/7) Epoch 29, batch 4900, loss[loss=0.19, simple_loss=0.2636, pruned_loss=0.05819, over 4798.00 frames. ], tot_loss[loss=0.1717, simple_loss=0.245, pruned_loss=0.04917, over 955496.18 frames. ], batch size: 51, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:44:34,629 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4793, 1.3468, 1.3557, 1.3682, 1.1614, 2.8725, 1.0823, 1.6009], device='cuda:6'), covar=tensor([0.3912, 0.3051, 0.2387, 0.2892, 0.1671, 0.0393, 0.2935, 0.1170], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0117, 0.0121, 0.0125, 0.0114, 0.0096, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 11:44:41,266 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165282.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:44:47,090 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.31 vs. limit=2.0 2023-03-27 11:45:03,732 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165314.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:45:16,148 INFO [finetune.py:976] (6/7) Epoch 29, batch 4950, loss[loss=0.1582, simple_loss=0.2364, pruned_loss=0.03996, over 4863.00 frames. ], tot_loss[loss=0.1715, simple_loss=0.2452, pruned_loss=0.04888, over 956151.91 frames. ], batch size: 34, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:45:23,887 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.29 vs. limit=2.0 2023-03-27 11:45:48,522 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165362.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:45:56,864 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.088e+01 1.449e+02 1.743e+02 2.087e+02 3.629e+02, threshold=3.486e+02, percent-clipped=0.0 2023-03-27 11:45:56,880 INFO [finetune.py:976] (6/7) Epoch 29, batch 5000, loss[loss=0.1499, simple_loss=0.2133, pruned_loss=0.04329, over 4363.00 frames. ], tot_loss[loss=0.1689, simple_loss=0.2422, pruned_loss=0.04778, over 954600.99 frames. ], batch size: 19, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:46:00,032 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.17 vs. 
limit=2.0 2023-03-27 11:46:02,093 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1704, 1.4090, 0.7170, 1.9999, 2.5639, 1.8519, 1.7197, 1.8698], device='cuda:6'), covar=tensor([0.1408, 0.2132, 0.2117, 0.1187, 0.1703, 0.1750, 0.1421, 0.1890], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0093, 0.0108, 0.0092, 0.0119, 0.0091, 0.0097, 0.0088], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:46:15,407 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165402.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:46:26,296 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.60 vs. limit=2.0 2023-03-27 11:46:30,165 INFO [finetune.py:976] (6/7) Epoch 29, batch 5050, loss[loss=0.1645, simple_loss=0.2358, pruned_loss=0.0466, over 4933.00 frames. ], tot_loss[loss=0.1662, simple_loss=0.239, pruned_loss=0.04667, over 954659.28 frames. ], batch size: 38, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:46:47,816 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165450.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:46:47,843 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.8972, 3.3962, 3.5861, 3.7928, 3.6788, 3.4006, 3.9625, 1.2132], device='cuda:6'), covar=tensor([0.0996, 0.1040, 0.1121, 0.1004, 0.1373, 0.1816, 0.0830, 0.6315], device='cuda:6'), in_proj_covar=tensor([0.0357, 0.0249, 0.0290, 0.0299, 0.0340, 0.0289, 0.0309, 0.0305], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:46:57,531 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6113, 1.4239, 1.2537, 1.6492, 1.6144, 1.6538, 1.0005, 1.3720], device='cuda:6'), covar=tensor([0.2285, 0.2152, 0.2120, 0.1716, 0.1714, 0.1284, 0.2636, 0.1996], device='cuda:6'), in_proj_covar=tensor([0.0247, 0.0212, 0.0217, 0.0199, 0.0246, 0.0191, 0.0219, 0.0207], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:47:02,937 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1716, 1.9847, 2.4037, 1.6287, 2.1457, 2.4165, 1.8839, 2.5522], device='cuda:6'), covar=tensor([0.1283, 0.2004, 0.1516, 0.2177, 0.0997, 0.1355, 0.2796, 0.0785], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0206, 0.0194, 0.0189, 0.0174, 0.0212, 0.0218, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:47:03,406 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.397e+02 1.700e+02 2.095e+02 3.650e+02, threshold=3.400e+02, percent-clipped=1.0 2023-03-27 11:47:03,422 INFO [finetune.py:976] (6/7) Epoch 29, batch 5100, loss[loss=0.1355, simple_loss=0.2137, pruned_loss=0.02866, over 4834.00 frames. ], tot_loss[loss=0.1632, simple_loss=0.2358, pruned_loss=0.04533, over 957478.72 frames. 
], batch size: 30, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:47:06,336 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165479.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:47:36,277 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6813, 1.5112, 1.8796, 1.2866, 1.7015, 1.8488, 1.4536, 2.0139], device='cuda:6'), covar=tensor([0.1130, 0.1960, 0.1267, 0.1639, 0.0835, 0.1167, 0.2673, 0.0731], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0207, 0.0194, 0.0190, 0.0174, 0.0213, 0.0219, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:47:46,456 INFO [finetune.py:976] (6/7) Epoch 29, batch 5150, loss[loss=0.1581, simple_loss=0.2402, pruned_loss=0.03803, over 4813.00 frames. ], tot_loss[loss=0.1644, simple_loss=0.2368, pruned_loss=0.04602, over 955282.96 frames. ], batch size: 40, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:48:04,793 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165551.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:48:20,243 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.818e+01 1.606e+02 1.856e+02 2.249e+02 3.990e+02, threshold=3.713e+02, percent-clipped=3.0 2023-03-27 11:48:20,259 INFO [finetune.py:976] (6/7) Epoch 29, batch 5200, loss[loss=0.211, simple_loss=0.2689, pruned_loss=0.07657, over 4842.00 frames. ], tot_loss[loss=0.1675, simple_loss=0.2404, pruned_loss=0.04729, over 955129.77 frames. ], batch size: 49, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:48:45,764 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165612.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:48:53,937 INFO [finetune.py:976] (6/7) Epoch 29, batch 5250, loss[loss=0.1706, simple_loss=0.2433, pruned_loss=0.04898, over 4779.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2423, pruned_loss=0.04794, over 955107.60 frames. ], batch size: 26, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:49:00,465 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.9736, 3.4291, 3.6184, 3.7862, 3.7616, 3.4907, 4.0507, 1.4494], device='cuda:6'), covar=tensor([0.0821, 0.1000, 0.0940, 0.0998, 0.1179, 0.1712, 0.0740, 0.5668], device='cuda:6'), in_proj_covar=tensor([0.0357, 0.0249, 0.0290, 0.0299, 0.0340, 0.0289, 0.0309, 0.0305], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:49:26,774 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.319e+01 1.511e+02 1.772e+02 2.250e+02 3.554e+02, threshold=3.544e+02, percent-clipped=0.0 2023-03-27 11:49:26,790 INFO [finetune.py:976] (6/7) Epoch 29, batch 5300, loss[loss=0.161, simple_loss=0.2431, pruned_loss=0.03944, over 4738.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.2429, pruned_loss=0.04825, over 953340.12 frames. 
], batch size: 27, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:49:52,104 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5786, 0.6995, 1.6815, 1.5482, 1.4833, 1.4088, 1.5020, 1.6373], device='cuda:6'), covar=tensor([0.4098, 0.3926, 0.3190, 0.3461, 0.4504, 0.3707, 0.4102, 0.2833], device='cuda:6'), in_proj_covar=tensor([0.0270, 0.0250, 0.0269, 0.0301, 0.0301, 0.0278, 0.0306, 0.0256], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:50:10,068 INFO [finetune.py:976] (6/7) Epoch 29, batch 5350, loss[loss=0.1528, simple_loss=0.2285, pruned_loss=0.03858, over 4822.00 frames. ], tot_loss[loss=0.17, simple_loss=0.2434, pruned_loss=0.04826, over 952040.24 frames. ], batch size: 33, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:50:34,270 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165750.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:50:59,728 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.046e+02 1.485e+02 1.788e+02 2.126e+02 3.231e+02, threshold=3.576e+02, percent-clipped=0.0 2023-03-27 11:50:59,744 INFO [finetune.py:976] (6/7) Epoch 29, batch 5400, loss[loss=0.1791, simple_loss=0.2278, pruned_loss=0.06523, over 4001.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2413, pruned_loss=0.04793, over 952331.14 frames. ], batch size: 17, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:51:02,245 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=165779.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:51:18,396 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=165803.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:51:24,190 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165811.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:51:33,004 INFO [finetune.py:976] (6/7) Epoch 29, batch 5450, loss[loss=0.1209, simple_loss=0.2007, pruned_loss=0.02049, over 4789.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2384, pruned_loss=0.04731, over 953552.03 frames. ], batch size: 29, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:51:34,299 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=165827.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:51:59,775 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=165864.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:52:06,354 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.128e+01 1.456e+02 1.721e+02 2.001e+02 5.868e+02, threshold=3.442e+02, percent-clipped=2.0 2023-03-27 11:52:06,370 INFO [finetune.py:976] (6/7) Epoch 29, batch 5500, loss[loss=0.1427, simple_loss=0.2183, pruned_loss=0.03351, over 4868.00 frames. ], tot_loss[loss=0.164, simple_loss=0.2353, pruned_loss=0.04634, over 954507.41 frames. 
], batch size: 34, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:52:27,347 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=165907.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:52:39,039 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3824, 1.3992, 1.2082, 1.3844, 1.6935, 1.6281, 1.4123, 1.2367], device='cuda:6'), covar=tensor([0.0377, 0.0297, 0.0646, 0.0281, 0.0212, 0.0423, 0.0304, 0.0406], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0107, 0.0148, 0.0111, 0.0103, 0.0118, 0.0105, 0.0116], device='cuda:6'), out_proj_covar=tensor([7.9896e-05, 8.1574e-05, 1.1532e-04, 8.4622e-05, 7.9824e-05, 8.7253e-05, 7.7793e-05, 8.8235e-05], device='cuda:6') 2023-03-27 11:52:40,105 INFO [finetune.py:976] (6/7) Epoch 29, batch 5550, loss[loss=0.1848, simple_loss=0.2629, pruned_loss=0.0534, over 4900.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2384, pruned_loss=0.04795, over 953766.50 frames. ], batch size: 43, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:53:16,528 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7370, 1.6891, 1.4830, 1.8770, 2.2351, 1.8631, 1.8053, 1.4525], device='cuda:6'), covar=tensor([0.2107, 0.1939, 0.1885, 0.1586, 0.1607, 0.1187, 0.2110, 0.1857], device='cuda:6'), in_proj_covar=tensor([0.0250, 0.0214, 0.0218, 0.0201, 0.0249, 0.0193, 0.0220, 0.0209], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:53:22,713 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.003e+02 1.570e+02 1.926e+02 2.203e+02 4.633e+02, threshold=3.853e+02, percent-clipped=1.0 2023-03-27 11:53:22,729 INFO [finetune.py:976] (6/7) Epoch 29, batch 5600, loss[loss=0.1703, simple_loss=0.2478, pruned_loss=0.04646, over 4750.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2395, pruned_loss=0.04744, over 952946.60 frames. ], batch size: 27, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:53:24,066 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.52 vs. limit=2.0 2023-03-27 11:53:36,028 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.80 vs. limit=2.0 2023-03-27 11:53:50,122 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0034, 1.5904, 2.3766, 3.7823, 2.6119, 2.5586, 0.9979, 3.2223], device='cuda:6'), covar=tensor([0.1520, 0.1291, 0.1186, 0.0590, 0.0688, 0.1942, 0.1777, 0.0377], device='cuda:6'), in_proj_covar=tensor([0.0099, 0.0114, 0.0131, 0.0163, 0.0099, 0.0134, 0.0123, 0.0100], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 11:53:54,009 INFO [finetune.py:976] (6/7) Epoch 29, batch 5650, loss[loss=0.1391, simple_loss=0.2205, pruned_loss=0.02886, over 4768.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2421, pruned_loss=0.04777, over 953505.19 frames. 
], batch size: 26, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:54:11,165 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166054.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:54:12,918 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.7680, 4.1562, 4.5047, 4.6294, 4.5369, 4.1916, 4.8320, 1.9835], device='cuda:6'), covar=tensor([0.0665, 0.0876, 0.0794, 0.0866, 0.0953, 0.1645, 0.0608, 0.5166], device='cuda:6'), in_proj_covar=tensor([0.0353, 0.0248, 0.0287, 0.0297, 0.0338, 0.0288, 0.0305, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:54:23,591 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.094e+01 1.435e+02 1.719e+02 2.053e+02 3.744e+02, threshold=3.437e+02, percent-clipped=0.0 2023-03-27 11:54:23,607 INFO [finetune.py:976] (6/7) Epoch 29, batch 5700, loss[loss=0.1506, simple_loss=0.205, pruned_loss=0.0481, over 4422.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2385, pruned_loss=0.04655, over 938204.26 frames. ], batch size: 19, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:54:32,082 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4406, 2.3753, 2.0948, 2.4347, 2.2539, 2.2058, 2.2515, 2.8824], device='cuda:6'), covar=tensor([0.3502, 0.3724, 0.3042, 0.3042, 0.3315, 0.2585, 0.3278, 0.1762], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0264, 0.0239, 0.0274, 0.0261, 0.0232, 0.0259, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:54:50,334 INFO [finetune.py:976] (6/7) Epoch 30, batch 0, loss[loss=0.2257, simple_loss=0.3, pruned_loss=0.07574, over 4926.00 frames. ], tot_loss[loss=0.2257, simple_loss=0.3, pruned_loss=0.07574, over 4926.00 frames. ], batch size: 42, lr: 2.82e-03, grad_scale: 32.0 2023-03-27 11:54:50,334 INFO [finetune.py:1001] (6/7) Computing validation loss 2023-03-27 11:54:52,268 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8418, 1.1439, 1.9709, 1.8753, 1.7360, 1.6480, 1.7581, 1.9415], device='cuda:6'), covar=tensor([0.4035, 0.4008, 0.3575, 0.3828, 0.4874, 0.3681, 0.4407, 0.2854], device='cuda:6'), in_proj_covar=tensor([0.0270, 0.0250, 0.0269, 0.0301, 0.0300, 0.0277, 0.0306, 0.0255], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:54:57,593 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1829, 1.9319, 1.8814, 1.8121, 1.8779, 1.9306, 1.9140, 2.5775], device='cuda:6'), covar=tensor([0.3712, 0.4252, 0.3252, 0.3882, 0.4220, 0.2533, 0.3922, 0.1804], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0264, 0.0239, 0.0274, 0.0261, 0.0232, 0.0259, 0.0239], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:55:08,621 INFO [finetune.py:1010] (6/7) Epoch 30, validation: loss=0.1598, simple_loss=0.2264, pruned_loss=0.04658, over 2265189.00 frames. 
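The validation record above ("Epoch 30, validation: loss=0.1598, ... over 2265189.00 frames.") is a frame-weighted average over the entire dev set, computed at the start of the epoch. A minimal sketch of that pattern under standard PyTorch conventions; the batch keys and the criterion signature here are illustrative assumptions, not icefall's actual interfaces:

```python
import torch

def compute_validation_loss(model, dev_loader, criterion, device):
    """Return total loss divided by total frame count over the dev set."""
    model.eval()
    tot_loss = 0.0
    tot_frames = 0.0
    with torch.no_grad():
        for batch in dev_loader:
            feats = batch["inputs"].to(device)    # hypothetical key names
            supervisions = batch["supervisions"]
            loss, num_frames = criterion(model, feats, supervisions)
            tot_loss += loss.item()
            tot_frames += num_frames
    model.train()
    return tot_loss / tot_frames   # the "validation: loss=..." figure
```

Weighting by frames rather than by batch makes the reported number insensitive to how the bucketing sampler happens to group utterances.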
2023-03-27 11:55:08,623 INFO [finetune.py:1011] (6/7) Maximum memory allocated so far is 6481MB 2023-03-27 11:55:11,876 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166106.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:55:21,154 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166115.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 11:55:30,686 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0632, 1.9058, 2.0737, 1.4461, 1.9514, 2.1456, 2.0696, 1.6787], device='cuda:6'), covar=tensor([0.0531, 0.0585, 0.0566, 0.0761, 0.0834, 0.0560, 0.0551, 0.1053], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0139, 0.0141, 0.0119, 0.0129, 0.0140, 0.0140, 0.0164], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 11:55:43,730 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166147.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:55:52,372 INFO [finetune.py:976] (6/7) Epoch 30, batch 50, loss[loss=0.1654, simple_loss=0.245, pruned_loss=0.04291, over 4859.00 frames. ], tot_loss[loss=0.175, simple_loss=0.2479, pruned_loss=0.05101, over 215994.04 frames. ], batch size: 34, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:56:02,574 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166159.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:56:16,053 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.060e+01 1.403e+02 1.686e+02 1.983e+02 3.736e+02, threshold=3.372e+02, percent-clipped=1.0 2023-03-27 11:56:28,084 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9007, 1.8430, 1.7749, 1.8047, 1.6709, 3.7582, 1.7796, 2.1514], device='cuda:6'), covar=tensor([0.2757, 0.2107, 0.1808, 0.2014, 0.1273, 0.0249, 0.2581, 0.1042], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0121, 0.0124, 0.0113, 0.0095, 0.0094, 0.0095], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 11:56:34,675 INFO [finetune.py:976] (6/7) Epoch 30, batch 100, loss[loss=0.1549, simple_loss=0.2186, pruned_loss=0.04558, over 4739.00 frames. ], tot_loss[loss=0.1658, simple_loss=0.2379, pruned_loss=0.0468, over 380580.83 frames. ], batch size: 59, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:56:38,186 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166207.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:56:39,336 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166208.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:56:45,061 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.41 vs. limit=5.0 2023-03-27 11:57:07,205 INFO [finetune.py:976] (6/7) Epoch 30, batch 150, loss[loss=0.17, simple_loss=0.2451, pruned_loss=0.04743, over 4800.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2343, pruned_loss=0.04527, over 507315.15 frames. 
], batch size: 29, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:57:08,968 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166255.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:57:21,830 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.042e+02 1.442e+02 1.804e+02 2.095e+02 4.016e+02, threshold=3.609e+02, percent-clipped=1.0 2023-03-27 11:57:22,142 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.33 vs. limit=2.0 2023-03-27 11:57:39,804 INFO [finetune.py:976] (6/7) Epoch 30, batch 200, loss[loss=0.1615, simple_loss=0.2404, pruned_loss=0.0413, over 4897.00 frames. ], tot_loss[loss=0.1624, simple_loss=0.2335, pruned_loss=0.0457, over 605477.26 frames. ], batch size: 43, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:57:42,819 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166307.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:58:14,803 INFO [finetune.py:976] (6/7) Epoch 30, batch 250, loss[loss=0.1649, simple_loss=0.2454, pruned_loss=0.04217, over 4803.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2369, pruned_loss=0.04668, over 684001.82 frames. ], batch size: 41, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:58:26,039 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166368.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:58:30,119 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.080e+02 1.508e+02 1.830e+02 2.341e+02 4.348e+02, threshold=3.661e+02, percent-clipped=2.0 2023-03-27 11:58:48,140 INFO [finetune.py:976] (6/7) Epoch 30, batch 300, loss[loss=0.1201, simple_loss=0.1961, pruned_loss=0.02202, over 4702.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2415, pruned_loss=0.04788, over 742985.85 frames. ], batch size: 23, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:58:50,084 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166406.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:58:51,659 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.35 vs. limit=2.0 2023-03-27 11:58:52,502 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166410.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 11:59:21,477 INFO [finetune.py:976] (6/7) Epoch 30, batch 350, loss[loss=0.1371, simple_loss=0.2179, pruned_loss=0.02816, over 4861.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2433, pruned_loss=0.04786, over 792415.24 frames. ], batch size: 31, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:59:22,660 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166454.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:25,728 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166459.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:37,685 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.476e+02 1.813e+02 2.161e+02 3.890e+02, threshold=3.626e+02, percent-clipped=2.0 2023-03-27 11:59:41,436 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166481.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:55,112 INFO [finetune.py:976] (6/7) Epoch 30, batch 400, loss[loss=0.158, simple_loss=0.2478, pruned_loss=0.03417, over 4754.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2436, pruned_loss=0.04769, over 828252.98 frames. 
], batch size: 28, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 11:59:55,184 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166503.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:58,013 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166507.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:59,263 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166509.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 11:59:59,339 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.20 vs. limit=2.0 2023-03-27 12:00:24,597 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166531.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:00:35,356 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166542.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:00:45,988 INFO [finetune.py:976] (6/7) Epoch 30, batch 450, loss[loss=0.1754, simple_loss=0.2435, pruned_loss=0.05366, over 4750.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2434, pruned_loss=0.04776, over 856702.28 frames. ], batch size: 54, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:00:57,407 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166570.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:01:00,792 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.283e+01 1.415e+02 1.694e+02 2.083e+02 4.695e+02, threshold=3.388e+02, percent-clipped=2.0 2023-03-27 12:01:08,420 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0906, 2.0247, 1.6549, 1.8623, 2.0327, 1.7513, 2.2286, 2.0749], device='cuda:6'), covar=tensor([0.1287, 0.1663, 0.2659, 0.2357, 0.2443, 0.1666, 0.2698, 0.1524], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0191, 0.0238, 0.0254, 0.0251, 0.0210, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:01:22,341 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166592.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:01:32,866 INFO [finetune.py:976] (6/7) Epoch 30, batch 500, loss[loss=0.1608, simple_loss=0.2322, pruned_loss=0.04468, over 4791.00 frames. ], tot_loss[loss=0.1679, simple_loss=0.2409, pruned_loss=0.04747, over 880808.30 frames. ], batch size: 29, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:01:46,189 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.2336, 3.6877, 3.8759, 4.0853, 4.0244, 3.7347, 4.3348, 1.4414], device='cuda:6'), covar=tensor([0.0810, 0.0875, 0.0992, 0.0929, 0.1143, 0.1713, 0.0709, 0.5763], device='cuda:6'), in_proj_covar=tensor([0.0355, 0.0249, 0.0288, 0.0298, 0.0339, 0.0288, 0.0306, 0.0304], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:02:06,080 INFO [finetune.py:976] (6/7) Epoch 30, batch 550, loss[loss=0.1494, simple_loss=0.2069, pruned_loss=0.04593, over 4695.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2371, pruned_loss=0.04636, over 896995.75 frames. 
], batch size: 23, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:02:12,199 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166663.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:02:20,415 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 7.871e+01 1.519e+02 1.786e+02 2.255e+02 3.834e+02, threshold=3.573e+02, percent-clipped=3.0 2023-03-27 12:02:39,362 INFO [finetune.py:976] (6/7) Epoch 30, batch 600, loss[loss=0.1959, simple_loss=0.262, pruned_loss=0.06484, over 4797.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2392, pruned_loss=0.04757, over 910252.80 frames. ], batch size: 29, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:02:43,711 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166710.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 12:02:45,126 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.40 vs. limit=2.0 2023-03-27 12:02:52,789 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6933, 1.7428, 2.3962, 1.9423, 1.9466, 3.5685, 1.6734, 1.8587], device='cuda:6'), covar=tensor([0.0901, 0.1448, 0.1419, 0.0835, 0.1235, 0.0264, 0.1244, 0.1417], device='cuda:6'), in_proj_covar=tensor([0.0075, 0.0083, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 12:03:00,928 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166735.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:03:12,733 INFO [finetune.py:976] (6/7) Epoch 30, batch 650, loss[loss=0.1905, simple_loss=0.2674, pruned_loss=0.05683, over 4809.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2416, pruned_loss=0.04815, over 916313.08 frames. ], batch size: 45, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:03:15,792 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166758.0, num_to_drop=1, layers_to_drop={0} 2023-03-27 12:03:19,475 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166764.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:03:21,894 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6017, 0.8138, 1.6744, 1.6272, 1.5164, 1.4512, 1.5586, 1.6552], device='cuda:6'), covar=tensor([0.3736, 0.3773, 0.3182, 0.3421, 0.4621, 0.3639, 0.4015, 0.2924], device='cuda:6'), in_proj_covar=tensor([0.0271, 0.0251, 0.0270, 0.0301, 0.0301, 0.0279, 0.0306, 0.0255], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:03:26,540 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.144e+02 1.552e+02 1.821e+02 2.218e+02 3.611e+02, threshold=3.642e+02, percent-clipped=1.0 2023-03-27 12:03:41,571 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166796.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:03:45,814 INFO [finetune.py:976] (6/7) Epoch 30, batch 700, loss[loss=0.1462, simple_loss=0.2177, pruned_loss=0.03736, over 4746.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2421, pruned_loss=0.04752, over 924908.61 frames. 
], batch size: 27, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:03:45,927 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166803.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:04:00,346 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166825.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:04:08,580 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166837.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:04:18,071 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=166851.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:04:19,223 INFO [finetune.py:976] (6/7) Epoch 30, batch 750, loss[loss=0.1767, simple_loss=0.2554, pruned_loss=0.04906, over 4824.00 frames. ], tot_loss[loss=0.1691, simple_loss=0.2433, pruned_loss=0.04748, over 932835.26 frames. ], batch size: 33, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:04:27,160 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166865.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:04:33,233 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.605e+01 1.409e+02 1.752e+02 2.054e+02 3.389e+02, threshold=3.504e+02, percent-clipped=0.0 2023-03-27 12:04:41,701 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=166887.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:04:52,669 INFO [finetune.py:976] (6/7) Epoch 30, batch 800, loss[loss=0.157, simple_loss=0.2379, pruned_loss=0.03805, over 4868.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2419, pruned_loss=0.04646, over 937790.39 frames. ], batch size: 34, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:05:09,490 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=166929.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:05:21,681 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.5865, 2.4869, 2.1505, 1.1207, 2.2119, 1.9857, 1.8444, 2.3515], device='cuda:6'), covar=tensor([0.1026, 0.0721, 0.1510, 0.1960, 0.1415, 0.2064, 0.1932, 0.0849], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0189, 0.0201, 0.0180, 0.0209, 0.0210, 0.0223, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:05:32,102 INFO [finetune.py:976] (6/7) Epoch 30, batch 850, loss[loss=0.1508, simple_loss=0.2197, pruned_loss=0.04091, over 4795.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2404, pruned_loss=0.04638, over 941825.51 frames. ], batch size: 29, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:05:35,448 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.41 vs. limit=5.0
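The [scaling.py:679] Whitening lines compare a covariance statistic against a limit (2.0 for the grouped 96- and 192-channel cases, 5.0 for the single-group 384-channel case above). A plausible reading, sketched below as an assumption rather than a copy of icefall's scaling.py: the metric is mean(eig^2)/mean(eig)^2 over the eigenvalues of the per-group feature covariance, which equals 1.0 exactly when the activations are white (covariance proportional to the identity) and grows with the eigenvalue spread, so metric=4.41 vs. limit=5.0 means the activations are still acceptably close to white:

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int) -> torch.Tensor:
    """x: (num_frames, num_channels); channels split into num_groups groups.

    Returns mean(eig^2) / mean(eig)^2 of the per-group covariance C, i.e.
    (tr(C^2)/d) / (tr(C)/d)^2, which is >= 1 with equality iff all
    eigenvalues are equal (a perfectly white covariance).
    """
    n, c = x.shape
    d = c // num_groups
    x = x.reshape(n, num_groups, d).transpose(0, 1)         # (groups, frames, d)
    x = x - x.mean(dim=1, keepdim=True)
    cov = x.transpose(1, 2) @ x / n                         # (groups, d, d)
    num = torch.diagonal(cov @ cov, dim1=1, dim2=2).mean()  # ~ tr(C^2)/d
    den = torch.diagonal(cov, dim1=1, dim2=2).mean() ** 2   # ~ (tr(C)/d)^2
    return num / den
```

Logging the metric against a limit gives an early warning when some module's activations are collapsing into a low-dimensional subspace.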
2023-03-27 12:05:42,368 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=166963.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:05:52,746 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8197, 1.1979, 0.8616, 1.6183, 2.2197, 1.5425, 1.5545, 1.5696], device='cuda:6'), covar=tensor([0.1621, 0.2632, 0.2133, 0.1457, 0.1979, 0.1937, 0.1650, 0.2351], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0109, 0.0093, 0.0119, 0.0092, 0.0097, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 12:05:53,842 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.050e+02 1.507e+02 1.769e+02 2.105e+02 4.355e+02, threshold=3.539e+02, percent-clipped=2.0 2023-03-27 12:06:07,547 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=166990.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:06:11,718 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.7298, 1.5311, 1.5426, 0.7751, 1.6678, 1.8068, 1.7105, 1.4825], device='cuda:6'), covar=tensor([0.0864, 0.0641, 0.0485, 0.0540, 0.0452, 0.0595, 0.0364, 0.0691], device='cuda:6'), in_proj_covar=tensor([0.0121, 0.0146, 0.0131, 0.0121, 0.0131, 0.0130, 0.0141, 0.0151], device='cuda:6'), out_proj_covar=tensor([8.8062e-05, 1.0470e-04, 9.2522e-05, 8.4448e-05, 9.1761e-05, 9.2035e-05, 9.9985e-05, 1.0798e-04], device='cuda:6') 2023-03-27 12:06:16,006 INFO [finetune.py:976] (6/7) Epoch 30, batch 900, loss[loss=0.1622, simple_loss=0.2223, pruned_loss=0.05104, over 4819.00 frames. ], tot_loss[loss=0.1647, simple_loss=0.2378, pruned_loss=0.04579, over 945276.38 frames. ], batch size: 25, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:06:23,180 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167011.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:06:59,309 INFO [finetune.py:976] (6/7) Epoch 30, batch 950, loss[loss=0.1741, simple_loss=0.237, pruned_loss=0.05556, over 4783.00 frames. ], tot_loss[loss=0.1642, simple_loss=0.2369, pruned_loss=0.04577, over 948127.12 frames. ], batch size: 26, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:07:09,750 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.88 vs. limit=5.0 2023-03-27 12:07:13,662 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.122e+02 1.522e+02 1.848e+02 2.259e+02 3.601e+02, threshold=3.696e+02, percent-clipped=1.0 2023-03-27 12:07:24,435 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167091.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:07:30,895 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=3.95 vs. limit=5.0 2023-03-27 12:07:31,399 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167100.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:07:33,102 INFO [finetune.py:976] (6/7) Epoch 30, batch 1000, loss[loss=0.1999, simple_loss=0.2768, pruned_loss=0.06149, over 4903.00 frames. ], tot_loss[loss=0.1665, simple_loss=0.2393, pruned_loss=0.04683, over 950873.46 frames. ], batch size: 43, lr: 2.81e-03, grad_scale: 32.0
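The grad_scale field tracks automatic mixed-precision loss scaling (the config has use_fp16: True). Just below, the scale doubles from 32 to 64 at batch 1050 and is back to 32 by batch 1450: a GradScaler grows the scale after a sustained run of overflow-free steps and halves it whenever scaled gradients hit inf/nan. A minimal sketch of the standard torch.cuda.amp loop; the stand-in model, data, and the init_scale/growth_interval values are illustrative, not the values icefall uses:

```python
import torch

model = torch.nn.Linear(80, 500).cuda()               # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=4e-3)
scaler = torch.cuda.amp.GradScaler(init_scale=32.0,   # illustrative values
                                   growth_interval=20)

for step in range(100):
    x = torch.randn(8, 80, device="cuda")             # stand-in batch
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                   # fp16 forward pass
        loss = model(x).square().mean()
    scaler.scale(loss).backward()                     # backprop scaled loss
    scaler.step(optimizer)    # skipped if the scaled grads overflowed
    scaler.update()           # doubles the scale after a clean run,
                              # halves it on overflow: hence 32 -> 64 -> 32
    print(f"grad_scale: {scaler.get_scale()}")
```

Logging the scale alongside the loss, as this run does, makes overflow events visible: a drop in grad_scale marks a batch whose step was skipped.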
2023-03-27 12:07:33,854 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167104.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:07:44,463 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167120.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:07:52,685 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.40 vs. limit=5.0 2023-03-27 12:07:55,373 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167137.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:08:01,873 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.26 vs. limit=2.0 2023-03-27 12:08:06,885 INFO [finetune.py:976] (6/7) Epoch 30, batch 1050, loss[loss=0.142, simple_loss=0.2193, pruned_loss=0.03237, over 4911.00 frames. ], tot_loss[loss=0.1672, simple_loss=0.2408, pruned_loss=0.04681, over 952953.47 frames. ], batch size: 36, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:08:09,420 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7826, 1.5785, 2.3151, 3.5405, 2.3860, 2.3945, 1.1647, 3.0521], device='cuda:6'), covar=tensor([0.1766, 0.1407, 0.1299, 0.0589, 0.0820, 0.1627, 0.1887, 0.0456], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 12:08:11,866 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167161.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:08:14,776 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167165.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:08:14,806 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167165.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:08:21,246 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.094e+02 1.490e+02 1.818e+02 2.230e+02 3.401e+02, threshold=3.636e+02, percent-clipped=0.0 2023-03-27 12:08:27,353 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167185.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:08:28,586 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167187.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:08:39,895 INFO [finetune.py:976] (6/7) Epoch 30, batch 1100, loss[loss=0.2326, simple_loss=0.2998, pruned_loss=0.08274, over 4853.00 frames. ], tot_loss[loss=0.1686, simple_loss=0.2424, pruned_loss=0.04738, over 953784.11 frames.
], batch size: 44, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:08:45,821 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2626, 1.5059, 0.7787, 2.0166, 2.4764, 1.8761, 1.8928, 2.0035], device='cuda:6'), covar=tensor([0.1330, 0.2048, 0.2083, 0.1177, 0.1845, 0.1866, 0.1328, 0.1966], device='cuda:6'), in_proj_covar=tensor([0.0089, 0.0094, 0.0109, 0.0093, 0.0120, 0.0093, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 12:08:46,985 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167213.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:09:01,419 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167235.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:09:13,765 INFO [finetune.py:976] (6/7) Epoch 30, batch 1150, loss[loss=0.1839, simple_loss=0.2686, pruned_loss=0.04961, over 4849.00 frames. ], tot_loss[loss=0.1699, simple_loss=0.2435, pruned_loss=0.04811, over 953631.06 frames. ], batch size: 44, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:09:28,392 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.072e+02 1.511e+02 1.772e+02 2.131e+02 4.281e+02, threshold=3.544e+02, percent-clipped=3.0 2023-03-27 12:09:34,972 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167285.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:09:47,289 INFO [finetune.py:976] (6/7) Epoch 30, batch 1200, loss[loss=0.1632, simple_loss=0.2377, pruned_loss=0.0444, over 4935.00 frames. ], tot_loss[loss=0.1693, simple_loss=0.2429, pruned_loss=0.04783, over 953844.25 frames. ], batch size: 33, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:10:20,452 INFO [finetune.py:976] (6/7) Epoch 30, batch 1250, loss[loss=0.1553, simple_loss=0.2225, pruned_loss=0.04407, over 4734.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2401, pruned_loss=0.04705, over 954951.12 frames. ], batch size: 54, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:10:27,100 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4125, 1.2855, 1.6947, 2.4655, 1.6840, 2.2768, 0.9578, 2.2023], device='cuda:6'), covar=tensor([0.1891, 0.1455, 0.1185, 0.0707, 0.0925, 0.1147, 0.1652, 0.0597], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0115, 0.0133, 0.0164, 0.0100, 0.0135, 0.0125, 0.0101], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 12:10:36,983 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.068e+02 1.462e+02 1.711e+02 2.091e+02 4.240e+02, threshold=3.422e+02, percent-clipped=1.0 2023-03-27 12:10:56,489 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167391.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:11:06,192 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7688, 1.2477, 0.7888, 1.6342, 2.1599, 1.5665, 1.5632, 1.7493], device='cuda:6'), covar=tensor([0.1337, 0.1957, 0.1865, 0.1144, 0.1896, 0.1931, 0.1312, 0.1764], device='cuda:6'), in_proj_covar=tensor([0.0090, 0.0094, 0.0109, 0.0093, 0.0120, 0.0092, 0.0098, 0.0089], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0004, 0.0003], device='cuda:6') 2023-03-27 12:11:09,178 INFO [finetune.py:976] (6/7) Epoch 30, batch 1300, loss[loss=0.1385, simple_loss=0.2056, pruned_loss=0.0357, over 4784.00 frames. 
], tot_loss[loss=0.1646, simple_loss=0.2368, pruned_loss=0.04617, over 954848.44 frames. ], batch size: 29, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:11:24,565 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167420.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:11:32,411 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167432.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:11:38,997 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167439.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:11:47,729 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3246, 1.3412, 1.1935, 1.3284, 1.6211, 1.5373, 1.3170, 1.2160], device='cuda:6'), covar=tensor([0.0368, 0.0329, 0.0654, 0.0339, 0.0217, 0.0422, 0.0359, 0.0421], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0107, 0.0149, 0.0112, 0.0103, 0.0118, 0.0104, 0.0116], device='cuda:6'), out_proj_covar=tensor([7.9990e-05, 8.1833e-05, 1.1603e-04, 8.4729e-05, 7.9501e-05, 8.7181e-05, 7.7443e-05, 8.7955e-05], device='cuda:6') 2023-03-27 12:11:55,875 INFO [finetune.py:976] (6/7) Epoch 30, batch 1350, loss[loss=0.1559, simple_loss=0.2324, pruned_loss=0.03975, over 4891.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2374, pruned_loss=0.04645, over 955207.32 frames. ], batch size: 32, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:11:58,262 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167456.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:12:00,700 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167460.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:12:07,071 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167468.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:12:11,227 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.213e+01 1.429e+02 1.665e+02 1.960e+02 3.889e+02, threshold=3.329e+02, percent-clipped=2.0 2023-03-27 12:12:23,209 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167493.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:12:29,690 INFO [finetune.py:976] (6/7) Epoch 30, batch 1400, loss[loss=0.1448, simple_loss=0.2096, pruned_loss=0.04, over 4830.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2392, pruned_loss=0.047, over 954166.88 frames. ], batch size: 25, lr: 2.81e-03, grad_scale: 64.0 2023-03-27 12:13:02,946 INFO [finetune.py:976] (6/7) Epoch 30, batch 1450, loss[loss=0.1688, simple_loss=0.2355, pruned_loss=0.051, over 4844.00 frames. ], tot_loss[loss=0.1671, simple_loss=0.2404, pruned_loss=0.0469, over 953134.77 frames. 
], batch size: 49, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:13:12,963 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([3.5425, 3.0369, 2.9289, 1.5107, 3.1174, 2.5079, 2.3885, 3.0100], device='cuda:6'), covar=tensor([0.0872, 0.0846, 0.1816, 0.2181, 0.1395, 0.2201, 0.2024, 0.1063], device='cuda:6'), in_proj_covar=tensor([0.0172, 0.0190, 0.0203, 0.0182, 0.0210, 0.0212, 0.0225, 0.0197], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:13:19,228 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.032e+02 1.535e+02 1.844e+02 2.174e+02 4.375e+02, threshold=3.689e+02, percent-clipped=4.0 2023-03-27 12:13:25,327 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167585.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:13:36,789 INFO [finetune.py:976] (6/7) Epoch 30, batch 1500, loss[loss=0.1843, simple_loss=0.2557, pruned_loss=0.0564, over 4884.00 frames. ], tot_loss[loss=0.1688, simple_loss=0.2427, pruned_loss=0.04743, over 954668.69 frames. ], batch size: 35, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:13:52,845 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.44 vs. limit=2.0 2023-03-27 12:13:57,379 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167633.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:14:10,430 INFO [finetune.py:976] (6/7) Epoch 30, batch 1550, loss[loss=0.1419, simple_loss=0.2016, pruned_loss=0.04113, over 4415.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.2434, pruned_loss=0.04774, over 954633.13 frames. ], batch size: 19, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:14:26,674 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.931e+01 1.416e+02 1.751e+02 2.105e+02 4.024e+02, threshold=3.503e+02, percent-clipped=1.0 2023-03-27 12:14:44,011 INFO [finetune.py:976] (6/7) Epoch 30, batch 1600, loss[loss=0.1458, simple_loss=0.2152, pruned_loss=0.03818, over 4903.00 frames. ], tot_loss[loss=0.1678, simple_loss=0.2414, pruned_loss=0.04706, over 955340.58 frames. ], batch size: 36, lr: 2.81e-03, grad_scale: 32.0 2023-03-27 12:14:54,753 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167719.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:15:17,589 INFO [finetune.py:976] (6/7) Epoch 30, batch 1650, loss[loss=0.1315, simple_loss=0.2088, pruned_loss=0.02708, over 4818.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.2392, pruned_loss=0.04677, over 955600.16 frames. ], batch size: 41, lr: 2.81e-03, grad_scale: 32.0
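Each [zipformer.py:1188] line describes one encoder stack's layer-skipping state for the current batch: the stack's warmup window in batches (warmup_begin/warmup_end, staggered per stack), the global batch_count, and which of its layers will be bypassed (num_to_drop, layers_to_drop). Whole-layer skipping is most useful early in training but is kept at a small residual rate afterwards, which is why occasional num_to_drop=1 lines still appear at batch_count past 160k. A hedged sketch of such a schedule; the probabilities below are assumptions for illustration, not icefall's values:

```python
import random

def pick_layers_to_drop(num_layers: int, batch_count: float,
                        warmup_begin: float, warmup_end: float) -> set:
    """Choose which encoder layers to bypass on this batch."""
    if batch_count < warmup_begin:
        drop_prob = 0.5                  # skip aggressively before warmup
    elif batch_count < warmup_end:
        # anneal the skip probability linearly across the warmup window
        frac = (batch_count - warmup_begin) / (warmup_end - warmup_begin)
        drop_prob = 0.5 * (1.0 - frac) + 0.05 * frac
    else:
        drop_prob = 0.05                 # small residual rate, hence the
                                         # occasional num_to_drop=1 late on
    return {i for i in range(num_layers) if random.random() < drop_prob}
```

Because the model is a residual stack, a bypassed layer simply passes its input through, so dropping layers regularizes depth without changing tensor shapes.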
2023-03-27 12:15:19,515 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167756.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:15:21,897 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=167760.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:15:26,695 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=167767.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:15:32,529 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.722e+01 1.428e+02 1.633e+02 1.926e+02 4.440e+02, threshold=3.266e+02, percent-clipped=1.0 2023-03-27 12:15:36,066 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167780.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:15:39,497 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1484, 1.9869, 2.0730, 1.3899, 2.0528, 2.1471, 2.0775, 1.7185], device='cuda:6'), covar=tensor([0.0589, 0.0713, 0.0706, 0.0886, 0.0762, 0.0664, 0.0679, 0.1270], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0138, 0.0141, 0.0119, 0.0129, 0.0139, 0.0139, 0.0163], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:15:41,293 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=167788.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:15:55,580 INFO [finetune.py:976] (6/7) Epoch 30, batch 1700, loss[loss=0.1487, simple_loss=0.2232, pruned_loss=0.03706, over 4813.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2384, pruned_loss=0.04671, over 955740.79 frames. ], batch size: 38, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:15:56,725 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167804.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:16:03,659 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=167808.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:16:25,662 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=167828.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:16:42,118 INFO [finetune.py:976] (6/7) Epoch 30, batch 1750, loss[loss=0.2269, simple_loss=0.2939, pruned_loss=0.07998, over 4276.00 frames. ], tot_loss[loss=0.1666, simple_loss=0.2395, pruned_loss=0.04688, over 954870.56 frames.
], batch size: 66, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:16:47,666 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0673, 1.9505, 2.0303, 1.4837, 1.9898, 2.1330, 2.1679, 1.6065], device='cuda:6'), covar=tensor([0.0601, 0.0683, 0.0736, 0.0809, 0.0724, 0.0650, 0.0580, 0.1221], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0118, 0.0128, 0.0138, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:16:54,724 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4075, 1.2674, 1.6982, 2.2747, 1.5457, 2.0015, 1.2579, 1.9974], device='cuda:6'), covar=tensor([0.1445, 0.1240, 0.0922, 0.0642, 0.0877, 0.1859, 0.1076, 0.0520], device='cuda:6'), in_proj_covar=tensor([0.0100, 0.0116, 0.0133, 0.0164, 0.0100, 0.0136, 0.0125, 0.0102], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0004, 0.0004, 0.0003, 0.0004, 0.0003, 0.0003], device='cuda:6') 2023-03-27 12:16:56,584 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.6859, 1.5965, 1.3826, 1.5756, 1.9703, 1.9482, 1.6476, 1.4648], device='cuda:6'), covar=tensor([0.0316, 0.0335, 0.0663, 0.0338, 0.0205, 0.0533, 0.0333, 0.0438], device='cuda:6'), in_proj_covar=tensor([0.0104, 0.0107, 0.0150, 0.0112, 0.0103, 0.0119, 0.0105, 0.0116], device='cuda:6'), out_proj_covar=tensor([7.9997e-05, 8.2062e-05, 1.1646e-04, 8.5244e-05, 7.9962e-05, 8.7409e-05, 7.8003e-05, 8.8186e-05], device='cuda:6') 2023-03-27 12:16:57,036 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.117e+02 1.486e+02 1.829e+02 2.140e+02 4.770e+02, threshold=3.658e+02, percent-clipped=2.0 2023-03-27 12:17:25,522 INFO [finetune.py:976] (6/7) Epoch 30, batch 1800, loss[loss=0.182, simple_loss=0.2555, pruned_loss=0.05422, over 4887.00 frames. ], tot_loss[loss=0.169, simple_loss=0.2427, pruned_loss=0.04764, over 955285.07 frames. ], batch size: 35, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:17:35,646 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.39 vs. limit=5.0 2023-03-27 12:17:43,990 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1895, 2.0817, 1.8248, 1.9994, 2.0132, 1.9471, 2.0129, 2.7967], device='cuda:6'), covar=tensor([0.3544, 0.4113, 0.3107, 0.3656, 0.3801, 0.2510, 0.3721, 0.1440], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0265, 0.0240, 0.0276, 0.0263, 0.0233, 0.0260, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:17:58,717 INFO [finetune.py:976] (6/7) Epoch 30, batch 1850, loss[loss=0.2076, simple_loss=0.2756, pruned_loss=0.06982, over 4900.00 frames. ], tot_loss[loss=0.1713, simple_loss=0.245, pruned_loss=0.04886, over 954679.38 frames. 
], batch size: 37, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:18:13,649 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.002e+02 1.458e+02 1.787e+02 2.144e+02 3.700e+02, threshold=3.573e+02, percent-clipped=1.0 2023-03-27 12:18:33,090 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.4023, 1.4257, 1.1693, 1.4162, 1.7758, 1.6374, 1.4424, 1.2454], device='cuda:6'), covar=tensor([0.0404, 0.0354, 0.0755, 0.0389, 0.0243, 0.0637, 0.0415, 0.0548], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0107, 0.0149, 0.0112, 0.0103, 0.0118, 0.0105, 0.0115], device='cuda:6'), out_proj_covar=tensor([7.9536e-05, 8.1431e-05, 1.1563e-04, 8.4699e-05, 7.9521e-05, 8.6768e-05, 7.7503e-05, 8.7454e-05], device='cuda:6') 2023-03-27 12:18:33,565 INFO [finetune.py:976] (6/7) Epoch 30, batch 1900, loss[loss=0.178, simple_loss=0.2511, pruned_loss=0.0525, over 4859.00 frames. ], tot_loss[loss=0.1718, simple_loss=0.2451, pruned_loss=0.04925, over 956158.93 frames. ], batch size: 44, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:18:55,022 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168035.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:19:01,994 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.3486, 2.1483, 1.9119, 2.2298, 2.1008, 2.1262, 2.0935, 2.9466], device='cuda:6'), covar=tensor([0.3587, 0.4400, 0.3241, 0.3764, 0.4160, 0.2462, 0.3573, 0.1626], device='cuda:6'), in_proj_covar=tensor([0.0291, 0.0265, 0.0240, 0.0276, 0.0264, 0.0233, 0.0260, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:19:07,329 INFO [finetune.py:976] (6/7) Epoch 30, batch 1950, loss[loss=0.1521, simple_loss=0.2293, pruned_loss=0.03742, over 4921.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2429, pruned_loss=0.04813, over 955760.70 frames. ], batch size: 38, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:19:08,068 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4230, 2.2457, 1.7905, 2.3261, 2.3242, 2.0649, 2.6373, 2.3965], device='cuda:6'), covar=tensor([0.1284, 0.2019, 0.3096, 0.2450, 0.2526, 0.1635, 0.2904, 0.1648], device='cuda:6'), in_proj_covar=tensor([0.0191, 0.0191, 0.0239, 0.0253, 0.0251, 0.0210, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:19:21,547 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168075.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:19:22,071 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.065e+02 1.458e+02 1.758e+02 2.118e+02 3.555e+02, threshold=3.516e+02, percent-clipped=0.0 2023-03-27 12:19:30,474 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168088.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:19:35,825 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=168096.0, num_to_drop=1, layers_to_drop={3} 2023-03-27 12:19:37,205 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=96, metric=1.38 vs. limit=2.0 2023-03-27 12:19:40,898 INFO [finetune.py:976] (6/7) Epoch 30, batch 2000, loss[loss=0.1815, simple_loss=0.2403, pruned_loss=0.0614, over 4813.00 frames. ], tot_loss[loss=0.1674, simple_loss=0.2404, pruned_loss=0.04718, over 955849.85 frames. 
], batch size: 39, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:19:54,028 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168123.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:20:01,902 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168136.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:20:14,571 INFO [finetune.py:976] (6/7) Epoch 30, batch 2050, loss[loss=0.1753, simple_loss=0.2489, pruned_loss=0.05083, over 4820.00 frames. ], tot_loss[loss=0.1649, simple_loss=0.2369, pruned_loss=0.04644, over 952063.01 frames. ], batch size: 33, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:20:24,866 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.1371, 1.7550, 2.3569, 1.6932, 2.1620, 2.2865, 1.5610, 2.4294], device='cuda:6'), covar=tensor([0.1054, 0.1901, 0.1335, 0.1799, 0.0822, 0.1160, 0.2924, 0.0678], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0207, 0.0193, 0.0189, 0.0174, 0.0212, 0.0219, 0.0198], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:20:28,425 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.0282, 1.9873, 1.7042, 2.1378, 2.4727, 2.1046, 1.7240, 1.6698], device='cuda:6'), covar=tensor([0.2120, 0.1859, 0.1895, 0.1538, 0.1371, 0.1133, 0.2230, 0.1856], device='cuda:6'), in_proj_covar=tensor([0.0251, 0.0215, 0.0219, 0.0202, 0.0249, 0.0195, 0.0222, 0.0209], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:20:29,510 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.016e+02 1.484e+02 1.729e+02 2.115e+02 4.273e+02, threshold=3.459e+02, percent-clipped=3.0 2023-03-27 12:20:29,644 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.8483, 2.8531, 2.8540, 2.0972, 2.8375, 3.0085, 3.1676, 2.5147], device='cuda:6'), covar=tensor([0.0558, 0.0534, 0.0600, 0.0794, 0.0598, 0.0614, 0.0504, 0.0998], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0137, 0.0141, 0.0119, 0.0128, 0.0138, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:20:47,507 INFO [finetune.py:976] (6/7) Epoch 30, batch 2100, loss[loss=0.1745, simple_loss=0.2579, pruned_loss=0.04556, over 4839.00 frames. ], tot_loss[loss=0.1664, simple_loss=0.238, pruned_loss=0.04739, over 952774.95 frames. ], batch size: 47, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:21:19,161 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168233.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:21:31,918 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.3281, 1.7501, 1.3455, 1.4931, 1.8597, 1.9144, 1.7076, 1.7249], device='cuda:6'), covar=tensor([0.0611, 0.0292, 0.0562, 0.0337, 0.0337, 0.0458, 0.0419, 0.0343], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0107, 0.0149, 0.0111, 0.0102, 0.0118, 0.0105, 0.0115], device='cuda:6'), out_proj_covar=tensor([7.9703e-05, 8.1330e-05, 1.1559e-04, 8.4698e-05, 7.9205e-05, 8.6514e-05, 7.7565e-05, 8.7434e-05], device='cuda:6') 2023-03-27 12:21:41,933 INFO [finetune.py:976] (6/7) Epoch 30, batch 2150, loss[loss=0.1537, simple_loss=0.2249, pruned_loss=0.04124, over 4762.00 frames. ], tot_loss[loss=0.1697, simple_loss=0.242, pruned_loss=0.04867, over 952089.46 frames. 
], batch size: 28, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:22:00,991 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.149e+02 1.551e+02 1.892e+02 2.224e+02 4.404e+02, threshold=3.784e+02, percent-clipped=3.0 2023-03-27 12:22:12,543 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=168294.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:22:18,786 INFO [finetune.py:976] (6/7) Epoch 30, batch 2200, loss[loss=0.1598, simple_loss=0.2288, pruned_loss=0.0454, over 4709.00 frames. ], tot_loss[loss=0.1714, simple_loss=0.244, pruned_loss=0.04942, over 952093.75 frames. ], batch size: 23, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:22:29,022 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([0.9998, 0.9937, 0.9737, 1.0701, 1.1676, 1.1481, 0.9972, 0.9876], device='cuda:6'), covar=tensor([0.0423, 0.0323, 0.0736, 0.0308, 0.0291, 0.0449, 0.0387, 0.0405], device='cuda:6'), in_proj_covar=tensor([0.0103, 0.0106, 0.0148, 0.0111, 0.0102, 0.0117, 0.0104, 0.0115], device='cuda:6'), out_proj_covar=tensor([7.9634e-05, 8.1154e-05, 1.1539e-04, 8.4615e-05, 7.8958e-05, 8.6296e-05, 7.7407e-05, 8.7262e-05], device='cuda:6') 2023-03-27 12:23:02,560 INFO [finetune.py:976] (6/7) Epoch 30, batch 2250, loss[loss=0.1914, simple_loss=0.2673, pruned_loss=0.05772, over 4128.00 frames. ], tot_loss[loss=0.1726, simple_loss=0.2457, pruned_loss=0.04979, over 952753.29 frames. ], batch size: 65, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:23:17,400 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168375.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:23:17,907 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.029e+01 1.513e+02 1.826e+02 2.132e+02 3.584e+02, threshold=3.652e+02, percent-clipped=0.0 2023-03-27 12:23:28,076 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168391.0, num_to_drop=1, layers_to_drop={2} 2023-03-27 12:23:36,282 INFO [finetune.py:976] (6/7) Epoch 30, batch 2300, loss[loss=0.1376, simple_loss=0.2146, pruned_loss=0.03031, over 4847.00 frames. ], tot_loss[loss=0.1716, simple_loss=0.2453, pruned_loss=0.04897, over 952617.22 frames. ], batch size: 49, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:23:49,961 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168423.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:23:50,002 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168423.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:23:51,199 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2295, 2.9411, 2.7719, 1.3214, 3.0414, 2.2185, 0.5823, 1.9154], device='cuda:6'), covar=tensor([0.2430, 0.2384, 0.2056, 0.3591, 0.1493, 0.1296, 0.4560, 0.1972], device='cuda:6'), in_proj_covar=tensor([0.0151, 0.0179, 0.0159, 0.0130, 0.0162, 0.0124, 0.0148, 0.0126], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 12:24:09,557 INFO [finetune.py:976] (6/7) Epoch 30, batch 2350, loss[loss=0.1425, simple_loss=0.2156, pruned_loss=0.03473, over 4857.00 frames. ], tot_loss[loss=0.1698, simple_loss=0.243, pruned_loss=0.04826, over 953588.90 frames. 
], batch size: 31, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:24:21,868 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168471.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:24:24,774 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.258e+01 1.458e+02 1.699e+02 2.116e+02 4.301e+02, threshold=3.398e+02, percent-clipped=1.0 2023-03-27 12:24:25,828 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.05 vs. limit=5.0 2023-03-27 12:24:42,044 INFO [finetune.py:976] (6/7) Epoch 30, batch 2400, loss[loss=0.1605, simple_loss=0.2284, pruned_loss=0.04633, over 4907.00 frames. ], tot_loss[loss=0.1668, simple_loss=0.2395, pruned_loss=0.0471, over 953837.49 frames. ], batch size: 36, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:24:49,117 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.4930, 2.4437, 2.0144, 2.7730, 2.5112, 2.2203, 2.8943, 2.6108], device='cuda:6'), covar=tensor([0.1160, 0.2192, 0.2606, 0.2026, 0.1981, 0.1386, 0.2512, 0.1379], device='cuda:6'), in_proj_covar=tensor([0.0190, 0.0191, 0.0238, 0.0252, 0.0250, 0.0209, 0.0216, 0.0204], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:25:15,071 INFO [finetune.py:976] (6/7) Epoch 30, batch 2450, loss[loss=0.1764, simple_loss=0.2461, pruned_loss=0.05337, over 4088.00 frames. ], tot_loss[loss=0.165, simple_loss=0.2371, pruned_loss=0.04642, over 952751.94 frames. ], batch size: 65, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:25:21,628 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.2938, 1.3367, 1.7387, 1.4572, 1.5169, 3.0039, 1.2708, 1.4487], device='cuda:6'), covar=tensor([0.1131, 0.1925, 0.1057, 0.1049, 0.1760, 0.0277, 0.1589, 0.1937], device='cuda:6'), in_proj_covar=tensor([0.0074, 0.0082, 0.0073, 0.0076, 0.0091, 0.0081, 0.0085, 0.0080], device='cuda:6'), out_proj_covar=tensor([0.0004, 0.0004, 0.0004, 0.0004, 0.0005, 0.0004, 0.0005, 0.0005], device='cuda:6') 2023-03-27 12:25:30,927 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.020e+02 1.487e+02 1.838e+02 2.245e+02 3.083e+02, threshold=3.676e+02, percent-clipped=0.0 2023-03-27 12:25:39,370 INFO [zipformer.py:1188] (6/7) warmup_begin=1333.3, warmup_end=2000.0, batch_count=168589.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:25:48,868 INFO [finetune.py:976] (6/7) Epoch 30, batch 2500, loss[loss=0.172, simple_loss=0.2377, pruned_loss=0.05311, over 4123.00 frames. ], tot_loss[loss=0.1659, simple_loss=0.2387, pruned_loss=0.04655, over 952350.63 frames. ], batch size: 65, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:26:00,751 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.8324, 1.7476, 1.5087, 1.5948, 2.1128, 2.1556, 1.7572, 1.5943], device='cuda:6'), covar=tensor([0.0318, 0.0348, 0.0679, 0.0381, 0.0233, 0.0439, 0.0373, 0.0437], device='cuda:6'), in_proj_covar=tensor([0.0102, 0.0105, 0.0147, 0.0111, 0.0101, 0.0116, 0.0104, 0.0114], device='cuda:6'), out_proj_covar=tensor([7.9125e-05, 8.0366e-05, 1.1445e-04, 8.3964e-05, 7.8370e-05, 8.5480e-05, 7.6823e-05, 8.6434e-05], device='cuda:6') 2023-03-27 12:26:14,827 INFO [scaling.py:679] (6/7) Whitening: num_groups=1, num_channels=384, metric=4.74 vs. limit=5.0 2023-03-27 12:26:27,844 INFO [finetune.py:976] (6/7) Epoch 30, batch 2550, loss[loss=0.17, simple_loss=0.2407, pruned_loss=0.04959, over 4925.00 frames. ], tot_loss[loss=0.168, simple_loss=0.2417, pruned_loss=0.04719, over 954807.22 frames. 
], batch size: 33, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:26:55,337 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.157e+02 1.538e+02 1.821e+02 2.163e+02 4.307e+02, threshold=3.641e+02, percent-clipped=2.0 2023-03-27 12:26:57,248 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.5513, 1.4239, 1.3803, 1.4869, 1.1924, 3.4305, 1.4700, 1.6147], device='cuda:6'), covar=tensor([0.4165, 0.3250, 0.2594, 0.2962, 0.1764, 0.0259, 0.2571, 0.1320], device='cuda:6'), in_proj_covar=tensor([0.0132, 0.0116, 0.0120, 0.0124, 0.0113, 0.0095, 0.0094, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 12:27:09,675 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168691.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:27:17,864 INFO [finetune.py:976] (6/7) Epoch 30, batch 2600, loss[loss=0.1786, simple_loss=0.2535, pruned_loss=0.05187, over 4892.00 frames. ], tot_loss[loss=0.1694, simple_loss=0.243, pruned_loss=0.04785, over 954850.74 frames. ], batch size: 35, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:27:17,984 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.7859, 1.5028, 1.8173, 1.2617, 1.6371, 1.8838, 1.4208, 2.0886], device='cuda:6'), covar=tensor([0.1090, 0.2143, 0.1357, 0.1794, 0.0974, 0.1371, 0.2960, 0.0782], device='cuda:6'), in_proj_covar=tensor([0.0188, 0.0204, 0.0190, 0.0187, 0.0172, 0.0209, 0.0216, 0.0195], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:27:36,304 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2167, 2.1639, 1.8420, 2.1442, 2.0389, 1.9953, 2.0420, 2.7901], device='cuda:6'), covar=tensor([0.3804, 0.4171, 0.3401, 0.3599, 0.3822, 0.2476, 0.3673, 0.1672], device='cuda:6'), in_proj_covar=tensor([0.0290, 0.0264, 0.0240, 0.0275, 0.0263, 0.0233, 0.0260, 0.0240], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:27:44,434 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168739.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:28:01,792 INFO [finetune.py:976] (6/7) Epoch 30, batch 2650, loss[loss=0.1514, simple_loss=0.2375, pruned_loss=0.0327, over 4807.00 frames. ], tot_loss[loss=0.1696, simple_loss=0.2435, pruned_loss=0.04792, over 952891.19 frames. ], batch size: 39, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:28:21,694 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 1.069e+02 1.510e+02 1.724e+02 1.979e+02 3.263e+02, threshold=3.448e+02, percent-clipped=0.0 2023-03-27 12:28:28,287 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.7115, 3.6675, 3.5065, 1.8156, 3.8013, 2.8170, 0.7814, 2.6261], device='cuda:6'), covar=tensor([0.2533, 0.1916, 0.1495, 0.3018, 0.0979, 0.1103, 0.4344, 0.1400], device='cuda:6'), in_proj_covar=tensor([0.0152, 0.0179, 0.0160, 0.0130, 0.0163, 0.0124, 0.0148, 0.0126], device='cuda:6'), out_proj_covar=tensor([0.0003, 0.0003, 0.0003, 0.0002, 0.0003, 0.0002, 0.0003, 0.0002], device='cuda:6') 2023-03-27 12:28:43,722 INFO [finetune.py:976] (6/7) Epoch 30, batch 2700, loss[loss=0.1589, simple_loss=0.2279, pruned_loss=0.04498, over 4804.00 frames. ], tot_loss[loss=0.1695, simple_loss=0.243, pruned_loss=0.04803, over 952716.46 frames. 
], batch size: 40, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:28:53,851 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([1.9949, 1.8517, 1.7519, 1.9196, 1.8021, 4.6726, 1.9641, 2.1898], device='cuda:6'), covar=tensor([0.3778, 0.2875, 0.2260, 0.2614, 0.1501, 0.0171, 0.2158, 0.1075], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0116, 0.0120, 0.0123, 0.0113, 0.0094, 0.0093, 0.0094], device='cuda:6'), out_proj_covar=tensor([0.0006, 0.0006, 0.0005, 0.0006, 0.0005, 0.0004, 0.0005, 0.0004], device='cuda:6') 2023-03-27 12:29:09,759 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([4.5695, 3.9970, 4.1717, 4.4210, 4.3593, 4.0659, 4.6909, 1.5851], device='cuda:6'), covar=tensor([0.0781, 0.0795, 0.0888, 0.0947, 0.1143, 0.1620, 0.0661, 0.5671], device='cuda:6'), in_proj_covar=tensor([0.0354, 0.0248, 0.0287, 0.0298, 0.0337, 0.0287, 0.0307, 0.0303], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:29:17,012 INFO [finetune.py:976] (6/7) Epoch 30, batch 2750, loss[loss=0.1655, simple_loss=0.2409, pruned_loss=0.04511, over 4891.00 frames. ], tot_loss[loss=0.1663, simple_loss=0.2391, pruned_loss=0.04671, over 952102.50 frames. ], batch size: 32, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:29:31,791 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2015, 2.0087, 2.1516, 1.5415, 2.1422, 2.2502, 2.2334, 1.7979], device='cuda:6'), covar=tensor([0.0531, 0.0718, 0.0673, 0.0826, 0.0721, 0.0677, 0.0548, 0.1137], device='cuda:6'), in_proj_covar=tensor([0.0131, 0.0138, 0.0142, 0.0119, 0.0129, 0.0139, 0.0139, 0.0162], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:29:32,260 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.461e+01 1.436e+02 1.670e+02 1.989e+02 2.987e+02, threshold=3.340e+02, percent-clipped=0.0 2023-03-27 12:29:41,553 INFO [zipformer.py:1188] (6/7) warmup_begin=2000.0, warmup_end=2666.7, batch_count=168889.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:29:50,504 INFO [finetune.py:976] (6/7) Epoch 30, batch 2800, loss[loss=0.1444, simple_loss=0.2151, pruned_loss=0.03688, over 4795.00 frames. ], tot_loss[loss=0.1651, simple_loss=0.2368, pruned_loss=0.04667, over 954617.27 frames. ], batch size: 29, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:30:13,016 INFO [zipformer.py:1188] (6/7) warmup_begin=666.7, warmup_end=1333.3, batch_count=168937.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:30:23,988 INFO [finetune.py:976] (6/7) Epoch 30, batch 2850, loss[loss=0.2389, simple_loss=0.2925, pruned_loss=0.09263, over 4936.00 frames. ], tot_loss[loss=0.1652, simple_loss=0.2366, pruned_loss=0.04695, over 956246.72 frames. ], batch size: 33, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:30:33,496 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168967.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:30:34,691 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=168969.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 12:30:38,879 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 8.606e+01 1.456e+02 1.728e+02 2.104e+02 5.266e+02, threshold=3.457e+02, percent-clipped=2.0 2023-03-27 12:30:57,839 INFO [finetune.py:976] (6/7) Epoch 30, batch 2900, loss[loss=0.1748, simple_loss=0.2473, pruned_loss=0.05119, over 4877.00 frames. 
], tot_loss[loss=0.167, simple_loss=0.2392, pruned_loss=0.04741, over 956152.12 frames. ], batch size: 31, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:31:14,732 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=169028.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:31:15,944 INFO [zipformer.py:1188] (6/7) warmup_begin=3333.3, warmup_end=4000.0, batch_count=169030.0, num_to_drop=1, layers_to_drop={1} 2023-03-27 12:31:18,989 INFO [zipformer.py:2441] (6/7) attn_weights_entropy = tensor([2.2589, 2.0259, 1.4897, 0.6209, 1.7010, 1.9565, 1.8127, 1.8721], device='cuda:6'), covar=tensor([0.0960, 0.0869, 0.1741, 0.2188, 0.1487, 0.2176, 0.2298, 0.0905], device='cuda:6'), in_proj_covar=tensor([0.0171, 0.0188, 0.0201, 0.0180, 0.0209, 0.0210, 0.0224, 0.0196], device='cuda:6'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002, 0.0002], device='cuda:6') 2023-03-27 12:31:31,769 INFO [finetune.py:976] (6/7) Epoch 30, batch 2950, loss[loss=0.1871, simple_loss=0.2643, pruned_loss=0.05494, over 4815.00 frames. ], tot_loss[loss=0.1684, simple_loss=0.2415, pruned_loss=0.04772, over 954191.84 frames. ], batch size: 38, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:31:47,443 INFO [scaling.py:679] (6/7) Whitening: num_groups=8, num_channels=192, metric=1.58 vs. limit=2.0 2023-03-27 12:31:49,076 INFO [optim.py:369] (6/7) Clipping_scale=2.0, grad-norm quartiles 9.406e+01 1.615e+02 1.887e+02 2.255e+02 4.054e+02, threshold=3.773e+02, percent-clipped=1.0 2023-03-27 12:31:53,294 INFO [zipformer.py:1188] (6/7) warmup_begin=2666.7, warmup_end=3333.3, batch_count=169082.0, num_to_drop=0, layers_to_drop=set() 2023-03-27 12:32:19,263 INFO [finetune.py:976] (6/7) Epoch 30, batch 3000, loss[loss=0.1943, simple_loss=0.2677, pruned_loss=0.0604, over 4750.00 frames. ], tot_loss[loss=0.1707, simple_loss=0.244, pruned_loss=0.04872, over 956017.56 frames. ], batch size: 27, lr: 2.80e-03, grad_scale: 32.0 2023-03-27 12:32:19,263 INFO [finetune.py:1001] (6/7) Computing validation loss
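A note on the recurring [optim.py:369] lines in this section: the five values logged as "grad-norm quartiles" read as min / 25% / median / 75% / max of the gradient norms over a window of recent batches, followed by a clipping threshold and the percentage of batches clipped. In these logs the threshold tracks Clipping_scale times the median (e.g. 2.0 x 1.887e+02 gives the threshold of 3.773e+02 in the last such record above, up to rounding), which suggests median-relative rather than fixed-cap clipping. The sketch below reconstructs that scheme from the logged numbers; it is an assumption about the behavior, not a copy of icefall's optimizer:

```python
import torch

class MedianRelativeClipper:
    """Clip each batch's grad norm at clipping_scale * median of recent norms."""

    def __init__(self, clipping_scale=2.0, window=128):
        self.clipping_scale = clipping_scale
        self.window = window
        self.norms = []                     # recent global grad norms

    def clip_(self, parameters) -> float:
        params = [p for p in parameters if p.grad is not None]
        norm = torch.norm(torch.stack([p.grad.norm() for p in params])).item()
        self.norms = (self.norms + [norm])[-self.window:]
        q = torch.quantile(torch.tensor(self.norms),
                           torch.tensor([0.0, 0.25, 0.5, 0.75, 1.0]))
        threshold = self.clipping_scale * q[2].item()   # scale * median
        if norm > threshold:
            for p in params:                            # rescale gradients
                p.grad.mul_(threshold / norm)
        print(f"grad-norm quartiles {q.tolist()}, threshold={threshold:.3e}")
        return norm
```

Clipping relative to a running median adapts the cap to whatever scale the (loss-scaled, fp16) gradients currently have, which a fixed threshold could not do while grad_scale moves between 32 and 64.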