icefall-libri-giga-pruned-transducer-stateless7-streaming-2023-04-04/decoding-results/modified_beam_search/log-decode-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4-2023-04-04-10-58-05
2023-04-04 10:58:05,936 INFO [decode.py:650] Decoding started
2023-04-04 10:58:05,936 INFO [decode.py:656] Device: cuda:0
2023-04-04 10:58:05,939 INFO [decode.py:666] {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.22', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '96c9a2aece2a3a7633da07740e24fa3d96f5498c', 'k2-git-date': 'Thu Nov 10 08:14:02 2022', 'lhotse-version': '1.13.0.dev+git.527d964.clean', 'torch-version': '1.12.1', 'torch-cuda-available': True, 'torch-cuda-version': '11.6', 'python-version': '3.8', 'icefall-git-branch': 'zipformer_libri_small_models', 'icefall-git-sha1': '1a059bd-dirty', 'icefall-git-date': 'Mon Apr 3 23:17:14 2023', 'icefall-path': '/ceph-data4/yangxiaoyu/softwares/icefall_development/icefall_small_models', 'k2-path': '/ceph-data4/yangxiaoyu/softwares/anaconda3/envs/k2_latest/lib/python3.8/site-packages/k2/__init__.py', 'lhotse-path': '/ceph-data4/yangxiaoyu/softwares/lhotse_development/lhotse_random_padding_left/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-1-1220091118-57c4d55446-mlpzc', 'IP address': '10.177.22.19'}, 'epoch': 99, 'iter': 0, 'avg': 1, 'use_averaged_model': False, 'exp_dir': PosixPath('pruned_transducer_stateless7_streaming_multi/exp'), 'bpe_model': 'data/lang_bpe_500/bpe.model', 'lang_dir': PosixPath('data/lang_bpe_500'), 'decoding_method': 'modified_beam_search', 'beam_size': 4, 'beam': 20.0, 'ngram_lm_scale': 0.01, 'max_contexts': 8, 'max_states': 64, 'context_size': 2, 'max_sym_per_frame': 1, 'num_paths': 200, 'nbest_scale': 0.5, 'num_encoder_layers': '2,4,3,2,4', 'feedforward_dims': '1024,1024,2048,2048,1024', 'nhead': '8,8,8,8,8', 'encoder_dims': '384,384,384,384,384', 'attention_dims': '192,192,192,192,192', 'encoder_unmasked_dims': '256,256,256,256,256', 'zipformer_downsampling_factors': '1,2,4,8,2', 'cnn_module_kernels': '31,31,31,31,31', 'decoder_dim': 512, 'joiner_dim': 512, 'short_chunk_size': 50, 'num_left_chunks': 4, 'decode_chunk_len': 32, 'max_duration': 600, 'bucketing_sampler': True, 'num_buckets': 30, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'on_the_fly_num_workers': 0, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'manifest_dir': PosixPath('data/fbank'), 'on_the_fly_feats': False, 'res_dir': PosixPath('pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search'), 'suffix': 'epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4', 'blank_id': 0, 'unk_id': 2, 'vocab_size': 500}
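For reference, the handful of settings in the dump above that actually determine this run can be collected into a plain Python dict. The values are copied from the log; the comments are interpretation (for example, the 10 ms frame shift is the usual icefall/lhotse default, not something the log states).

# Minimal sketch (not icefall's code): the decoding settings that drive this run.
decoding_config = {
    "epoch": 99,                  # load exp_dir/epoch-99.pt
    "avg": 1,                     # average a single checkpoint, i.e. no averaging
    "use_averaged_model": False,
    "decoding_method": "modified_beam_search",
    "beam_size": 4,               # hypotheses kept per frame in modified beam search
    "decode_chunk_len": 32,       # 32 feature frames per streaming chunk (~320 ms at 10 ms shift)
    "context_size": 2,            # decoder (prediction network) left context in tokens
    "max_sym_per_frame": 1,       # at most one symbol emitted per encoder frame
    "blank_id": 0,
    "vocab_size": 500,            # BPE vocabulary from data/lang_bpe_500
    "max_duration": 600,          # seconds of audio per decoding batch
}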
2023-04-04 10:58:05,939 INFO [decode.py:668] About to create model
2023-04-04 10:58:06,547 INFO [zipformer.py:405] At encoder stack 4, which has downsampling_factor=2, we will combine the outputs of layers 1 and 3, with downsampling_factors=2 and 8.
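The zipformer.py message describes how the last encoder stack combines outputs computed at different downsampling factors. The sketch below is only a rough illustration of that idea, not the actual zipformer.py implementation: the coarser (factor-8) output is brought back to the factor-2 frame rate and mixed with the factor-2 output.

import torch

def combine_outputs(x_ds2: torch.Tensor, x_ds8: torch.Tensor, weight: float = 0.5) -> torch.Tensor:
    # Illustrative only.  x_ds2: (T2, N, C) output at downsampling factor 2;
    # x_ds8: (T8, N, C) output at downsampling factor 8, so T8 is roughly T2 / 4.
    up = x_ds8.repeat_interleave(4, dim=0)          # back to (roughly) the factor-2 rate
    if up.size(0) >= x_ds2.size(0):
        up = up[: x_ds2.size(0)]
    else:
        pad = up[-1:].expand(x_ds2.size(0) - up.size(0), -1, -1)
        up = torch.cat([up, pad], dim=0)
    return (1.0 - weight) * x_ds2 + weight * up     # simple weighted combination

# Example shapes: 100 frames at factor 2, 25 frames at factor 8, batch 4, dim 384.
out = combine_outputs(torch.randn(100, 4, 384), torch.randn(25, 4, 384))
print(out.shape)  # torch.Size([100, 4, 384])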
2023-04-04 10:58:06,561 INFO [checkpoint.py:112] Loading checkpoint from pruned_transducer_stateless7_streaming_multi/exp/epoch-99.pt
2023-04-04 10:58:08,653 INFO [decode.py:773] Number of model parameters: 70369391
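A sketch of how the checkpoint load and the reported parameter count could be reproduced outside decode.py. The assumption that the weights sit under a "model" key follows the usual icefall checkpoint layout and should be verified against checkpoint.py.

import torch

# Sketch only, not decode.py itself.
ckpt = torch.load(
    "pruned_transducer_stateless7_streaming_multi/exp/epoch-99.pt",
    map_location="cpu",
)
state_dict = ckpt.get("model", ckpt)  # assumed "model" key; fall back to a bare state dict

# decode.py reports sum(p.numel() for p in model.parameters()); summing the saved
# tensors should give the same 70369391 as long as no extra buffers are stored.
num_params = sum(t.numel() for t in state_dict.values() if isinstance(t, torch.Tensor))
print(f"Number of model parameters: {num_params}")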
2023-04-04 10:58:08,654 INFO [librispeech.py:58] About to get test-clean cuts from data/fbank/librispeech_cuts_test-clean.jsonl.gz
2023-04-04 10:58:08,656 INFO [librispeech.py:63] About to get test-other cuts from data/fbank/librispeech_cuts_test-other.jsonl.gz
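The two test sets are read from the gzipped JSONL manifests named above. A minimal sketch using lhotse directly, assuming the manifests exist at those paths:

from lhotse import CutSet

# Read the same manifests decode.py reports above.
test_clean_cuts = CutSet.from_file("data/fbank/librispeech_cuts_test-clean.jsonl.gz")
test_other_cuts = CutSet.from_file("data/fbank/librispeech_cuts_test-other.jsonl.gz")

# The manifests may be loaded lazily, so count cuts by iterating.
print(sum(1 for _ in test_clean_cuts), "cuts in test-clean")
print(sum(1 for _ in test_other_cuts), "cuts in test-other")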
2023-04-04 10:58:16,408 INFO [decode.py:561] batch 0/?, cuts processed until now is 26
2023-04-04 10:59:43,573 INFO [decode.py:561] batch 20/?, cuts processed until now is 1545
2023-04-04 11:00:34,246 INFO [decode.py:561] batch 40/?, cuts processed until now is 2375
2023-04-04 11:01:02,890 INFO [decode.py:575] The transcripts are stored in pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/recogs-test-clean-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:01:02,986 INFO [utils.py:558] [test-clean-beam_size_4] %WER 2.40% [1262 / 52576, 146 ins, 91 del, 1025 sub ]
2023-04-04 11:01:03,191 INFO [decode.py:586] Wrote detailed error stats to pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/errs-test-clean-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:01:03,192 INFO [decode.py:600]
For test-clean, WER of different settings are:
beam_size_4 2.4 best for test-clean
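The WER lines reported by utils.py decompose into insertions, deletions, and substitutions over the number of reference words. A quick arithmetic check of the test-clean figure:

# Quick check of the reported test-clean WER.
ins, dels, subs = 146, 91, 1025
ref_words = 52576
errors = ins + dels + subs                 # 1262, matching the bracketed count
wer = 100.0 * errors / ref_words           # 2.4003... -> printed as 2.40%
print(f"%WER {wer:.2f}% [{errors} / {ref_words}]")

The same formula reproduces the 6.00% reported for test-other further down: (338 + 267 + 2534) / 52343 = 3139 / 52343, which rounds to 6.00%.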
2023-04-04 11:01:09,240 INFO [decode.py:561] batch 0/?, cuts processed until now is 30
2023-04-04 11:02:02,573 INFO [zipformer.py:2401] attn_weights_entropy = tensor([1.4093, 1.2808, 3.8227, 3.5314, 3.3854, 3.6476, 3.8247, 3.3508],
device='cuda:0'), covar=tensor([0.6423, 0.5574, 0.0887, 0.1561, 0.1109, 0.1311, 0.0547, 0.1300],
device='cuda:0'), in_proj_covar=tensor([0.0336, 0.0311, 0.0437, 0.0450, 0.0361, 0.0415, 0.0346, 0.0390],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002],
device='cuda:0')
2023-04-04 11:02:27,917 INFO [zipformer.py:2401] attn_weights_entropy = tensor([1.4260, 1.2570, 3.9083, 3.6061, 3.4586, 3.7981, 3.9308, 3.3958],
device='cuda:0'), covar=tensor([0.6695, 0.5766, 0.0924, 0.1731, 0.1194, 0.0962, 0.0470, 0.1250],
device='cuda:0'), in_proj_covar=tensor([0.0336, 0.0311, 0.0437, 0.0450, 0.0361, 0.0415, 0.0346, 0.0390],
device='cuda:0'), out_proj_covar=tensor([0.0002, 0.0002, 0.0002, 0.0002, 0.0001, 0.0002, 0.0001, 0.0002],
device='cuda:0')
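The attn_weights_entropy lines above are periodic diagnostics emitted from zipformer.py during the forward pass. As a generic illustration of the statistic (not necessarily the exact reduction zipformer.py uses), the entropy of an attention-weight distribution can be computed as below; values near log(seq_len) indicate nearly uniform attention, while small values indicate sharply peaked attention.

import torch

def attention_entropy(attn_weights: torch.Tensor, eps: float = 1e-20) -> torch.Tensor:
    # attn_weights: (num_heads, ..., seq_len), non-negative and summing to 1 over
    # the last dimension.  Returns one averaged entropy value per head.
    p = attn_weights.clamp(min=eps)
    entropy = -(p * p.log()).sum(dim=-1)        # entropy per query position
    return entropy.flatten(start_dim=1).mean(dim=1)

# Uniform attention over 50 positions gives entropy log(50) ~= 3.91 for every head.
uniform = torch.full((8, 4, 50, 50), 1.0 / 50)
print(attention_entropy(uniform))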
2023-04-04 11:02:35,598 INFO [decode.py:561] batch 20/?, cuts processed until now is 1771
2023-04-04 11:05:15,238 INFO [decode.py:561] batch 40/?, cuts processed until now is 2696
2023-04-04 11:06:27,564 INFO [decode.py:575] The transcripts are stored in pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/recogs-test-other-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:06:27,666 INFO [utils.py:558] [test-other-beam_size_4] %WER 6.00% [3139 / 52343, 338 ins, 267 del, 2534 sub ]
2023-04-04 11:06:27,878 INFO [decode.py:586] Wrote detailed error stats to pruned_transducer_stateless7_streaming_multi/exp/modified_beam_search/errs-test-other-epoch-99-avg-1-streaming-chunk-size-32-modified_beam_search-beam-size-4.txt
2023-04-04 11:06:27,879 INFO [decode.py:600]
For test-other, WER of different settings are:
beam_size_4 6.0 best for test-other
2023-04-04 11:06:27,879 INFO [decode.py:805] Done!