2023-12-26 10:03:10,531 INFO [inference_audio_tagging.py:316] Evaluation started
2023-12-26 10:03:10,531 INFO [inference_audio_tagging.py:318] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.3', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'e400fa3b456faf8afe0ee5bfe572946b4921a3db', 'k2-git-date': 'Sat Jul 15 04:21:50 2023', 'lhotse-version': '1.16.0', 'torch-version': '2.0.1+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.9', 'icefall-git-branch': 'multi_KD', 'icefall-git-sha1': '85777b80-clean', 'icefall-git-date': 'Mon Dec 25 12:09:21 2023', 'icefall-path': '/xy/mnt/yangxiaoyu/workspace/icefall_multi_KD', 'k2-path': '/root/anaconda3/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/root/anaconda3/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'NGK_xiaoyu'}, 'epoch': 19, 'iter': 0, 'avg': 3, 'use_averaged_model': True, 'exp_dir': PosixPath('multi_KD/exp_finetune_asr_full_libri1_6-fold_do_AT1_KD_as_unbalanced_scale2.0_do_SV1_only_vox2_scale10.0_freeze_12000steps_encoder_lr_scale0.2_freeze_3layers_ecapa_lr_scale0.2_init_3_tasks_pretrain_avg_musan0_sync_task_md1500'), 'trained_with_distillation': True, 'trained_with_multitask': False, 'freeze_encoder': False, 'num_events': 527, 'eval_subset': 'eval', 'vocab_size': 500, 'blank_id': 0, 'context_size': 2, 'do_audio_tagging': True, 'use_encoder_projection': True, 'encoder_projection_dim': 2560, 'freezing_encoder_layer_index': '-1', 'freeze_encoder_steps': -1, 'save_logits': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'use_transducer': True, 'use_ctc': False, 'speaker_input_idx': 2, 'whisper_dim': 768, 'num_codebooks': 32, 'mvq_kd_layer_idx': -1, 'use_subsampled_output': True, 'full_libri': True, 'mini_libri': False, 'use_voxceleb': False, 'voxceleb_subset': 'vox1', 'use_libriheavy': False, 'libriheavy_subset': 'small', 'use_audioset': False, 'audioset_subset': 'balanced', 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 500, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'enable_audioset': False, 'use_musan_separately': False, 'input_strategy': 'PrecomputedFeatures', 'drop_features': False, 'return_audio': False, 'use_beats': True, 'use_ecapa': False, 'use_whisper': False, 'whisper_mvq': False, 'beats_ckpt': 'data/models/BEATs/BEATs_iter3_plus_AS2M_finetuned_on_AS2M_cpt2.pt', 'whisper_version': 'small.en', 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16, 'transformer_lm_tie_weights': True, 'res_dir': PosixPath('multi_KD/exp_finetune_asr_full_libri1_6-fold_do_AT1_KD_as_unbalanced_scale2.0_do_SV1_only_vox2_scale10.0_freeze_12000steps_encoder_lr_scale0.2_freeze_3layers_ecapa_lr_scale0.2_init_3_tasks_pretrain_avg_musan0_sync_task_md1500/inference_audio_tagging'), 'suffix': 'epoch-19-avg-3-use-averaged-model'}
2023-12-26 10:03:10,531 INFO [inference_audio_tagging.py:324] About to create model
2023-12-26 10:03:10,858 INFO [inference_audio_tagging.py:403] Calculating the averaged model over epoch range from 16 (excluded) to 19
2023-12-26 10:03:19,913 INFO [inference_audio_tagging.py:421] Number of model parameters: 64264454
2023-12-26 10:03:19,914 INFO [kd_datamodule.py:840] About to get the audioset eval cuts.
2023-12-26 10:03:19,953 INFO [kd_datamodule.py:534] About to create dev dataset
2023-12-26 10:03:20,276 INFO [kd_datamodule.py:555] About to create dev dataloader
2023-12-26 10:03:26,222 INFO [inference_audio_tagging.py:289] Processed 100 cuts already.
2023-12-26 10:03:31,094 INFO [inference_audio_tagging.py:289] Processed 1100 cuts already.
2023-12-26 10:03:35,903 INFO [inference_audio_tagging.py:289] Processed 2101 cuts already.
2023-12-26 10:03:40,757 INFO [inference_audio_tagging.py:289] Processed 3101 cuts already.
2023-12-26 10:03:45,527 INFO [inference_audio_tagging.py:289] Processed 4101 cuts already.
2023-12-26 10:03:48,684 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.0400, 3.6383, 3.6122, 3.5454], device='cuda:0')
2023-12-26 10:03:50,336 INFO [inference_audio_tagging.py:289] Processed 5101 cuts already.
2023-12-26 10:03:55,136 INFO [inference_audio_tagging.py:289] Processed 6101 cuts already.
2023-12-26 10:03:59,957 INFO [inference_audio_tagging.py:289] Processed 7101 cuts already.
2023-12-26 10:04:04,855 INFO [inference_audio_tagging.py:289] Processed 8101 cuts already.
2023-12-26 10:04:09,494 INFO [inference_audio_tagging.py:289] Processed 9101 cuts already.
2023-12-26 10:04:14,169 INFO [inference_audio_tagging.py:289] Processed 10101 cuts already.
2023-12-26 10:04:17,067 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.3741, 3.8612, 4.2465, 3.9674], device='cuda:0')
2023-12-26 10:04:18,966 INFO [inference_audio_tagging.py:289] Processed 11101 cuts already.
2023-12-26 10:04:23,887 INFO [inference_audio_tagging.py:289] Processed 12101 cuts already.
2023-12-26 10:04:28,697 INFO [inference_audio_tagging.py:289] Processed 13101 cuts already.
2023-12-26 10:04:32,892 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.5960, 4.0711, 3.6240, 4.0984], device='cuda:0')
2023-12-26 10:04:33,481 INFO [inference_audio_tagging.py:289] Processed 14101 cuts already.
2023-12-26 10:04:33,643 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.3473, 3.7937, 4.1801, 3.9400], device='cuda:0')
2023-12-26 10:04:38,252 INFO [inference_audio_tagging.py:289] Processed 15101 cuts already.
2023-12-26 10:04:38,514 INFO [inference_audio_tagging.py:290] Finish collecting audio logits
2023-12-26 10:04:39,855 INFO [inference_audio_tagging.py:454] mAP for audioset eval is: 0.4593073711338618
2023-12-26 10:04:39,855 INFO [inference_audio_tagging.py:456] Done
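
Note: the mAP reported above (0.4593 on the AudioSet 'eval' subset) is a macro average of per-class average precision over the 527 event classes ('num_events': 527). The sketch below is not the code from inference_audio_tagging.py; it is a minimal illustration, assuming the collected logits and the multi-hot targets have been stacked into (num_cuts, 527) tensors, of how such a figure can be computed with scikit-learn. The function name audioset_map and the tensor names are hypothetical.

import numpy as np
import torch
from sklearn.metrics import average_precision_score

def audioset_map(logits: torch.Tensor, labels: torch.Tensor) -> float:
    # logits: (num_cuts, 527) raw model outputs collected during inference
    # labels: (num_cuts, 527) multi-hot ground-truth event labels
    scores = logits.sigmoid().cpu().numpy()          # AP is rank-based, so the sigmoid only rescales scores
    targets = labels.cpu().numpy().astype(np.int32)  # 0/1 targets per event class
    # Macro average: mean of per-class average precision over the 527 classes.
    return float(average_precision_score(targets, scores, average="macro"))

If a class has no positive example in the eval subset, its average precision is ill-defined, so a real evaluation script may need to exclude such classes from the mean; the sketch ignores that detail.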