2024-08-19 11:41:26,228 INFO [inference_speaker.py:250] Evaluation started
2024-08-19 11:41:26,229 INFO [inference_speaker.py:252] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.3', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': 'e400fa3b456faf8afe0ee5bfe572946b4921a3db', 'k2-git-date': 'Sat Jul 15 04:21:50 2023', 'lhotse-version': '1.16.0', 'torch-version': '2.0.1+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.9', 'icefall-git-branch': 'multi_KD_with_wenet', 'icefall-git-sha1': '0d2af1df-dirty', 'icefall-git-date': 'Wed Aug 14 17:27:16 2024', 'icefall-path': '/xy/mnt/yangxiaoyu/workspace/icefall_multi_KD', 'k2-path': '/root/anaconda3/lib/python3.9/site-packages/k2/__init__.py', 'lhotse-path': '/root/anaconda3/lib/python3.9/site-packages/lhotse/__init__.py', 'hostname': 'NGK_xiaoyu'}, 'epoch': 30, 'iter': 392000, 'avg': 5, 'use_averaged_model': False, 'exp_dir': PosixPath('multi_KD/exp_causal1_delta6KD_LS1_5fold+wenetspech0_0fold+as_unbalanced1+vox_1_vox2_base_lr_0.045_use_beats_1_scale_1.0_use_ecapa_1_layer_2_scale_10.0_1_scale_1.0_specaug0_musan0_with_task_ID_stop_early1_share_asr1_md1500_amp_bf16'), 'trained_with_distillation': True, 'freeze_encoder': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'causal': True, 'chunk_size': '32', 'left_context_frames': '256', 'use_transducer': True, 'use_ctc': False, 'speaker_input_idx': 2, 'whisper_dim': 1280, 'use_task_id': False, 'num_codebooks': 32, 'mvq_kd_layer_idx': -1, 'use_subsampled_output': True, 'delta_t': 0, 'full_libri': True, 'mini_libri': False, 'use_libriheavy': False, 'libriheavy_subset': 'small', 'use_librispeech': False, 'use_wenetspeech': False, 'use_audioset': False, 'audioset_subset': 'balanced', 'use_voxceleb': False, 'voxceleb_subset': 'vox1', 'use_fma': False, 'fma_subset': 'large', 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 400, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'drop_last': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'enable_audioset': False, 'use_musan_separately': False, 'input_strategy': 'PrecomputedFeatures', 'drop_features': False, 'return_audio': False, 'use_beats': True, 'use_ecapa': True, 'use_whisper': True, 'whisper_mvq': False, 'beats_ckpt': 'data/models/BEATs/BEATs_iter3_plus_AS2M_finetuned_on_AS2M_cpt2.pt', 'whisper_version': 'small.en', 'use_mert': False, 'lm_vocab_size': 500, 'lm_epoch': 7, 'lm_avg': 1, 'lm_exp_dir': None, 'rnn_lm_embedding_dim': 2048, 'rnn_lm_hidden_dim': 2048, 'rnn_lm_num_layers': 3, 'rnn_lm_tie_weights': True, 'transformer_lm_exp_dir': None, 'transformer_lm_dim_feedforward': 2048, 'transformer_lm_encoder_dim': 768, 'transformer_lm_embedding_dim': 768, 'transformer_lm_nhead': 8, 'transformer_lm_num_layers': 16,
'transformer_lm_tie_weights': True, 'res_dir': PosixPath('multi_KD/exp_causal1_delta6KD_LS1_5fold+wenetspech0_0fold+as_unbalanced1+vox_1_vox2_base_lr_0.045_use_beats_1_scale_1.0_use_ecapa_1_layer_2_scale_10.0_1_scale_1.0_specaug0_musan0_with_task_ID_stop_early1_share_asr1_md1500_amp_bf16/inference_speaker_verification'), 'suffix': 'iter-392000-avg-5-chunk-size-32-left-context-frames-256'}
2024-08-19 11:41:26,229 INFO [inference_speaker.py:258] About to create model
2024-08-19 11:41:26,622 INFO [inference_speaker.py:293] averaging ['multi_KD/exp_causal1_delta6KD_LS1_5fold+wenetspech0_0fold+as_unbalanced1+vox_1_vox2_base_lr_0.045_use_beats_1_scale_1.0_use_ecapa_1_layer_2_scale_10.0_1_scale_1.0_specaug0_musan0_with_task_ID_stop_early1_share_asr1_md1500_amp_bf16/checkpoint-392000.pt', 'multi_KD/exp_causal1_delta6KD_LS1_5fold+wenetspech0_0fold+as_unbalanced1+vox_1_vox2_base_lr_0.045_use_beats_1_scale_1.0_use_ecapa_1_layer_2_scale_10.0_1_scale_1.0_specaug0_musan0_with_task_ID_stop_early1_share_asr1_md1500_amp_bf16/checkpoint-388000.pt', 'multi_KD/exp_causal1_delta6KD_LS1_5fold+wenetspech0_0fold+as_unbalanced1+vox_1_vox2_base_lr_0.045_use_beats_1_scale_1.0_use_ecapa_1_layer_2_scale_10.0_1_scale_1.0_specaug0_musan0_with_task_ID_stop_early1_share_asr1_md1500_amp_bf16/checkpoint-384000.pt', 'multi_KD/exp_causal1_delta6KD_LS1_5fold+wenetspech0_0fold+as_unbalanced1+vox_1_vox2_base_lr_0.045_use_beats_1_scale_1.0_use_ecapa_1_layer_2_scale_10.0_1_scale_1.0_specaug0_musan0_with_task_ID_stop_early1_share_asr1_md1500_amp_bf16/checkpoint-380000.pt', 'multi_KD/exp_causal1_delta6KD_LS1_5fold+wenetspech0_0fold+as_unbalanced1+vox_1_vox2_base_lr_0.045_use_beats_1_scale_1.0_use_ecapa_1_layer_2_scale_10.0_1_scale_1.0_specaug0_musan0_with_task_ID_stop_early1_share_asr1_md1500_amp_bf16/checkpoint-376000.pt']
2024-08-19 11:41:48,955 INFO [inference_speaker.py:360] Number of model parameters: 66484678
2024-08-19 11:41:48,955 INFO [kd_datamodule.py:840] About to get the test set of voxceleb1 set.
2024-08-19 11:41:49,026 INFO [fetching.py:128] Fetch hyperparams.yaml: Using existing file/symlink in pretrained_models/EncoderClassifier-8f6f7fdaa9628acf73e21ad1f99d5f83/hyperparams.yaml.
2024-08-19 11:41:49,028 INFO [fetching.py:162] Fetch custom.py: Delegating to Huggingface hub, source speechbrain/spkrec-ecapa-voxceleb.
2024-08-19 11:41:59,216 INFO [fetching.py:128] Fetch embedding_model.ckpt: Using existing file/symlink in pretrained_models/EncoderClassifier-8f6f7fdaa9628acf73e21ad1f99d5f83/embedding_model.ckpt.
2024-08-19 11:41:59,222 INFO [fetching.py:128] Fetch mean_var_norm_emb.ckpt: Using existing file/symlink in pretrained_models/EncoderClassifier-8f6f7fdaa9628acf73e21ad1f99d5f83/mean_var_norm_emb.ckpt.
2024-08-19 11:41:59,224 INFO [fetching.py:128] Fetch classifier.ckpt: Using existing file/symlink in pretrained_models/EncoderClassifier-8f6f7fdaa9628acf73e21ad1f99d5f83/classifier.ckpt.
2024-08-19 11:41:59,226 INFO [fetching.py:128] Fetch label_encoder.txt: Using existing file/symlink in pretrained_models/EncoderClassifier-8f6f7fdaa9628acf73e21ad1f99d5f83/label_encoder.ckpt.
2024-08-19 11:41:59,226 INFO [parameter_transfer.py:299] Loading pretrained files for: embedding_model, mean_var_norm_emb, classifier, label_encoder
2024-08-19 11:41:59,722 INFO [kd_datamodule.py:120] Successfully load ecapa-tdnn model.
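
The "Fetch ..." and "Loading pretrained files" entries above come from SpeechBrain caching and loading the pretrained speechbrain/spkrec-ecapa-voxceleb EncoderClassifier used as the speaker-embedding reference model. Below is a minimal sketch of that loading step, assuming SpeechBrain and torchaudio are installed; the savedir and the example wav path are placeholders (the log shows a hashed cache directory), and this is an illustration rather than the actual kd_datamodule.py code.

import torch
import torchaudio
from speechbrain.pretrained import EncoderClassifier  # speechbrain.inference.EncoderClassifier in SpeechBrain >= 1.0

# Download (or reuse a cached copy of) the pretrained ECAPA-TDNN speaker model.
classifier = EncoderClassifier.from_hparams(
    source="speechbrain/spkrec-ecapa-voxceleb",
    savedir="pretrained_models/spkrec-ecapa-voxceleb",  # placeholder; the log uses a hashed cache dir
    run_opts={"device": "cuda" if torch.cuda.is_available() else "cpu"},
)

# Extract a fixed-size speaker embedding for one utterance. "utt.wav" is a
# placeholder; the run above iterates over precomputed VoxCeleb1 test cuts instead.
waveform, sample_rate = torchaudio.load("utt.wav")
embedding = classifier.encode_batch(waveform)  # shape: (batch, 1, 192) for this model
print(embedding.shape)
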
2024-08-19 11:42:04,952 INFO [inference_speaker.py:187] Processed 61 cuts already.
2024-08-19 11:42:09,454 INFO [inference_speaker.py:187] Processed 826 cuts already.
2024-08-19 11:42:13,680 INFO [inference_speaker.py:187] Processed 1651 cuts already.
2024-08-19 11:42:16,702 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([1.2895, 1.6964, 1.2117, 1.0003], device='cuda:0')
2024-08-19 11:42:17,879 INFO [inference_speaker.py:187] Processed 2538 cuts already.
2024-08-19 11:42:20,863 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([2.9940, 1.4857, 2.1762, 1.0844, 1.5119, 2.3143, 2.4554, 1.6178], device='cuda:0')
2024-08-19 11:42:22,228 INFO [inference_speaker.py:187] Processed 3263 cuts already.
2024-08-19 11:42:23,116 INFO [zipformer.py:1877] name=None, attn_weights_entropy = tensor([4.0105, 3.8201, 2.9445, 3.4112], device='cuda:0')
2024-08-19 11:42:26,661 INFO [inference_speaker.py:187] Processed 4068 cuts already.
2024-08-19 11:42:30,288 INFO [inference_speaker.py:187] Processed 4874 cuts already.
2024-08-19 11:42:30,322 INFO [inference_speaker.py:188] Finish collecting speaker embeddings
2024-08-19 11:42:30,323 INFO [inference_speaker.py:195] -----------For testing set: VoxCeleb1-cleaned------------
2024-08-19 11:42:30,354 INFO [inference_speaker.py:199] A total of 37611 pairs.
2024-08-19 11:42:31,308 INFO [inference_speaker.py:222] Operating threshold for VoxCeleb1-cleaned: 0.2915, FAR: 0.0111, FRR: 0.0111, EER: 0.0111
2024-08-19 11:42:31,308 INFO [inference_speaker.py:223] Finished testing for VoxCeleb1-cleaned
2024-08-19 11:42:31,311 INFO [inference_speaker.py:392] Done!
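
The final "Operating threshold ... EER" line reports the equal error rate point on the VoxCeleb1-cleaned trial list: the similarity threshold at which the false acceptance rate (FAR) and false rejection rate (FRR) coincide. The sketch below shows one generic way to obtain such numbers from per-pair similarity scores and same/different-speaker labels; compute_eer is a hypothetical helper written for illustration and is not the implementation inside inference_speaker.py.

import numpy as np

def compute_eer(scores: np.ndarray, labels: np.ndarray):
    """Sweep thresholds over the scores; labels: 1 = same speaker, 0 = different."""
    order = np.argsort(scores)
    scores, labels = scores[order], labels[order]

    num_target = labels.sum()
    num_nontarget = len(labels) - num_target

    # With the decision rule "accept if score > threshold":
    # FRR = fraction of target pairs at or below the threshold,
    # FAR = fraction of non-target pairs above it.
    frr = np.cumsum(labels) / num_target
    far = 1.0 - np.cumsum(1 - labels) / num_nontarget

    idx = int(np.argmin(np.abs(far - frr)))  # point where the two error rates cross
    eer = (far[idx] + frr[idx]) / 2.0
    return scores[idx], far[idx], frr[idx], eer

# Toy usage with synthetic scores; real scores come from comparing the speaker
# embeddings collected above for each of the 37611 trial pairs.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=37611)
scores = rng.normal(loc=labels.astype(float), scale=0.5)
threshold, far, frr, eer = compute_eer(scores, labels)
print(f"Operating threshold: {threshold:.4f}, FAR: {far:.4f}, FRR: {frr:.4f}, EER: {eer:.4f}")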