icefall-multi-kd-finetune-amp-fp16/inference_audio_tagging/log-decode-epoch-6-avg-1-use-averaged-model-2023-12-25-10-06-13
2023-12-25 10:06:13,198 INFO [inference_audio_tagging.py:316] Evaluation started
2023-12-25 10:06:13,199 INFO [inference_audio_tagging.py:318] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.3', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'e400fa3b456faf8afe0ee5bfe572946b4921a3db', 'k2-git-date': 'Sat Jul 15 04:21:50 2023', 'lhotse-version': '1.16.0', 'torch-version': '2.0.1+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.9', 'icefall-git-branch': 'multi_KD', 'icefall-git-sha1': 'a77761c2-dirty', 'icefall-git-date': 'Tue Nov 28 15:54:58 2023', 'icefall-path': '/xy/mnt/yangxiaoyu/workspace/icefall_multi_KD', 'k2-path': '/root/anaconda3/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/root/anaconda3/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'NGK_xiaoyu'}, 'epoch': 6, 'iter': 0, 'avg': 1, 'use_averaged_model': True, 'exp_dir': PosixPath('multi_KD/exp_finetune_asr_full_libri1_6-fold_do_AT1_KD_as_unbalanced_scale2.0_do_SV1_only_vox2_scale10.0_freeze_12000steps_encoder_lr_scale0.2_freeze_3layers_ecapa_lr_scale0.2_init_3_tasks_pretrain_avg_musan0_sync_task_md1500'), 'trained_with_distillation': True, 'trained_with_multitask': False, 'freeze_encoder': False, 'num_events': 527, 'eval_subset': 'eval', 'vocab_size': 500, 'blank_id': 0, 'context_size': 2, 'do_audio_tagging': True, 'use_encoder_projection': True, 'encoder_projection_dim': 2560, 'freezing_encoder_layer_index': '-1', 'freeze_encoder_steps': -1, 'save_logits': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'speaker_input_idx': 2, 'whisper_dim': 768, 'num_codebooks': 32, 'mvq_kd_layer_idx': -1, 'use_subsampled_output': True, 'full_libri': True, 'mini_libri': False, 'use_voxceleb': False, 'voxceleb_subset': 'vox1', 'use_libriheavy': False, 'libriheavy_subset': 'small', 'use_audioset': False, 'audioset_subset': 'balanced', 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 300, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'enable_audioset': False, 'use_musan_separately': False, 'input_strategy': 'PrecomputedFeatures', 'drop_features': False, 'return_audio': False, 'use_beats': True, 'use_ecapa': False, 'use_whisper': False, 'whisper_mvq': False, 'beats_ckpt': 'data/models/BEATs/BEATs_iter3_plus_AS2M_finetuned_on_AS2M_cpt2.pt', 'whisper_version': 'small.en', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 
'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('multi_KD/exp_finetune_asr_full_libri1_6-fold_do_AT1_KD_as_unbalanced_scale2.0_do_SV1_only_vox2_scale10.0_freeze_12000steps_encoder_lr_scale0.2_freeze_3layers_ecapa_lr_scale0.2_init_3_tasks_pretrain_avg_musan0_sync_task_md1500/inference_audio_tagging'), 'suffix': 'epoch-6-avg-1-use-averaged-model'}
2023-12-25 10:06:13,200 INFO [inference_audio_tagging.py:324] About to create model
2023-12-25 10:06:13,522 INFO [inference_audio_tagging.py:403] Calculating the averaged model over epoch range from 5 (excluded) to 6
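The averaging step above combines the running parameter averages stored in the epoch-5 and epoch-6 checkpoints so that only the span after epoch 5 contributes. A minimal sketch of that idea, assuming each checkpoint stores a running-average state dict "model_avg" and a "batch_idx_train" counter (names are illustrative, not necessarily the exact icefall API):

    import torch

    def average_between_checkpoints(ckpt_start: str, ckpt_end: str) -> dict:
        # Assumed checkpoint layout: a running average of the parameters
        # ("model_avg") plus the number of training batches seen so far
        # ("batch_idx_train").
        start = torch.load(ckpt_start, map_location="cpu")
        end = torch.load(ckpt_end, map_location="cpu")
        n_start, n_end = start["batch_idx_train"], end["batch_idx_train"]
        weight = n_end / (n_end - n_start)
        averaged = {}
        for name, p_end in end["model_avg"].items():
            p_start = start["model_avg"][name]
            # Subtract the contribution of everything up to the excluded
            # checkpoint so only the (start, end] range remains.
            averaged[name] = p_end * weight - p_start * (weight - 1)
        return averaged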
2023-12-25 10:06:19,764 INFO [inference_audio_tagging.py:421] Number of model parameters: 64264454
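The parameter count reported here is the usual sum over all registered parameter tensors, along the lines of:

    import logging
    # "model" is the averaged Zipformer audio-tagging model loaded above.
    num_param = sum(p.numel() for p in model.parameters())
    logging.info(f"Number of model parameters: {num_param}")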
2023-12-25 10:06:19,764 INFO [kd_datamodule.py:840] About to get the audioset eval cuts.
2023-12-25 10:06:19,803 INFO [kd_datamodule.py:534] About to create dev dataset
2023-12-25 10:06:20,131 INFO [kd_datamodule.py:555] About to create dev dataloader
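The "Processed N cuts already." lines that follow come from a batched loop over this dataloader that collects the audio-tagging logits. A rough sketch of such a loop; the forward signature, batch keys, and logging interval are assumptions, not the script's exact code:

    import logging
    import torch

    def collect_audio_logits(model, dataloader, device="cuda"):
        num_cuts, all_logits = 0, []
        model.eval()
        with torch.no_grad():
            for batch_idx, batch in enumerate(dataloader):
                feats = batch["inputs"].to(device)                    # (N, T, 80) fbank features
                feat_lens = batch["supervisions"]["num_frames"].to(device)
                logits = model(feats, feat_lens)                      # hypothetical forward call
                all_logits.append(logits.cpu())
                num_cuts += feats.size(0)
                if batch_idx % 10 == 0:                               # illustrative log interval
                    logging.info(f"Processed {num_cuts} cuts already.")
        logging.info("Finish collecting audio logits")
        return torch.cat(all_logits)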
2023-12-25 10:06:24,723 INFO [inference_audio_tagging.py:289] Processed 60 cuts already.
2023-12-25 10:06:26,808 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([3.8254, 3.4243, 3.2198, 3.5137, 3.4297, 3.0046, 3.3423, 3.2737],
device='cuda:0') | |
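The zipformer.py messages interleaved with the progress lines are a periodic diagnostic reporting, per attention head, the entropy (in nats) of the attention weight distribution. A sketch of such a computation; the tensor shape and epsilon are assumptions:

    import torch

    def attn_weights_entropy(attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: (num_heads, batch, tgt_len, src_len), each row summing to 1.
        eps = 1.0e-20
        entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return entropy.mean(dim=(1, 2))  # one averaged entropy value per head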
2023-12-25 10:06:27,767 INFO [inference_audio_tagging.py:289] Processed 660 cuts already.
2023-12-25 10:06:29,910 INFO [inference_audio_tagging.py:289] Processed 1260 cuts already.
2023-12-25 10:06:32,073 INFO [inference_audio_tagging.py:289] Processed 1860 cuts already.
2023-12-25 10:06:34,211 INFO [inference_audio_tagging.py:289] Processed 2460 cuts already.
2023-12-25 10:06:36,694 INFO [inference_audio_tagging.py:289] Processed 3060 cuts already.
2023-12-25 10:06:36,791 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.0555, 3.7489, 3.3850, 3.4800], device='cuda:0')
2023-12-25 10:06:37,008 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.3660, 3.9445, 4.1906, 3.9111], device='cuda:0')
2023-12-25 10:06:37,730 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([5.1147, 4.8782, 5.0371, 4.3041], device='cuda:0')
2023-12-25 10:06:37,786 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.0898, 3.7864, 3.6233, 3.4858], device='cuda:0')
2023-12-25 10:06:38,890 INFO [inference_audio_tagging.py:289] Processed 3660 cuts already.
2023-12-25 10:06:41,176 INFO [inference_audio_tagging.py:289] Processed 4260 cuts already.
2023-12-25 10:06:44,284 INFO [inference_audio_tagging.py:289] Processed 4860 cuts already.
2023-12-25 10:06:46,397 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.6098, 4.0172, 4.0227, 4.1828], device='cuda:0')
2023-12-25 10:06:47,553 INFO [inference_audio_tagging.py:289] Processed 5460 cuts already.
2023-12-25 10:06:50,632 INFO [inference_audio_tagging.py:289] Processed 6060 cuts already.
2023-12-25 10:06:53,708 INFO [inference_audio_tagging.py:289] Processed 6660 cuts already.
2023-12-25 10:06:56,055 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([3.8681, 3.0779, 2.4980, 3.1714, 3.1076, 3.0846, 2.9262, 2.8655],
device='cuda:0') | |
2023-12-25 10:06:56,723 INFO [inference_audio_tagging.py:289] Processed 7260 cuts already.
2023-12-25 10:06:59,794 INFO [inference_audio_tagging.py:289] Processed 7860 cuts already.
2023-12-25 10:07:01,915 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.3817, 3.9486, 4.1995, 3.9401], device='cuda:0')
2023-12-25 10:07:02,888 INFO [inference_audio_tagging.py:289] Processed 8460 cuts already.
2023-12-25 10:07:06,092 INFO [inference_audio_tagging.py:289] Processed 9060 cuts already.
2023-12-25 10:07:09,190 INFO [inference_audio_tagging.py:289] Processed 9660 cuts already.
2023-12-25 10:07:10,204 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.8573, 4.8617, 5.0784, 4.9506], device='cuda:0')
2023-12-25 10:07:11,743 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.5401, 3.7877, 4.0349, 3.8908], device='cuda:0')
2023-12-25 10:07:12,270 INFO [inference_audio_tagging.py:289] Processed 10260 cuts already.
2023-12-25 10:07:15,395 INFO [inference_audio_tagging.py:289] Processed 10860 cuts already.
2023-12-25 10:07:18,527 INFO [inference_audio_tagging.py:289] Processed 11460 cuts already.
2023-12-25 10:07:21,790 INFO [inference_audio_tagging.py:289] Processed 12060 cuts already.
2023-12-25 10:07:22,648 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.8544, 4.8766, 5.1067, 5.0647], device='cuda:0')
2023-12-25 10:07:24,935 INFO [inference_audio_tagging.py:289] Processed 12660 cuts already.
2023-12-25 10:07:28,078 INFO [inference_audio_tagging.py:289] Processed 13260 cuts already.
2023-12-25 10:07:31,260 INFO [inference_audio_tagging.py:289] Processed 13860 cuts already.
2023-12-25 10:07:34,311 INFO [inference_audio_tagging.py:289] Processed 14460 cuts already.
2023-12-25 10:07:35,237 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.9146, 3.9152, 4.0627, 3.5919], device='cuda:0')
2023-12-25 10:07:37,557 INFO [inference_audio_tagging.py:289] Processed 15060 cuts already.
2023-12-25 10:07:37,873 INFO [inference_audio_tagging.py:290] Finish collecting audio logits
2023-12-25 10:07:39,225 INFO [inference_audio_tagging.py:454] mAP for audioset eval is: 0.45474173173967275
2023-12-25 10:07:39,225 INFO [inference_audio_tagging.py:456] Done
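The reported mAP (0.4547 on the AudioSet eval subset) is class-wise average precision averaged over the 527 event classes. A minimal sketch of that metric, assuming macro-averaged AP over multi-hot labels:

    import numpy as np
    from sklearn.metrics import average_precision_score

    def audioset_map(logits: np.ndarray, labels: np.ndarray) -> float:
        # logits: (num_cuts, 527) raw scores; labels: (num_cuts, 527) multi-hot targets.
        scores = 1.0 / (1.0 + np.exp(-logits))  # sigmoid; monotonic, so AP is unchanged
        ap_per_class = average_precision_score(labels, scores, average=None)
        return float(np.mean(ap_per_class))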