icefall-promptasr-with-context-libriheavy-zipformer-BERT-2023-10-10/log/log-train-2023-10-06-13-16-43-0
2023-10-06 13:16:43,589 INFO [train_bert_encoder.py:1464] (0/4) Training started
2023-10-06 13:16:43,594 INFO [train_bert_encoder.py:1485] (0/4) Device: cuda:0
2023-10-06 13:16:43,597 INFO [train_bert_encoder.py:1494] (0/4) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.3', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2b2ac14b326d61d79d04e53fbd69b1ff6d630411', 'k2-git-date': 'Thu Aug 24 05:58:26 2023', 'lhotse-version': '1.17.0.dev+git.3dde48dc.clean', 'torch-version': '2.0.1+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.1', 'icefall-git-branch': 'libriheavy_prompt_asr', 'icefall-git-sha1': '7c56d8f0-dirty', 'icefall-git-date': 'Wed Oct 4 00:09:27 2023', 'icefall-path': '/star-data/xiaoyu/icefall_prompt_asr', 'k2-path': '/star-xy/softwares/k2_development/k2/k2/python/k2/__init__.py', 'lhotse-path': '/star-xy/softwares/lhotse_development/lhotse/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0423201334-6587bbc68d-tn554', 'IP address': '10.177.74.211'}, 'world_size': 4, 'master_port': 13994, 'tensorboard': True, 'num_epochs': 60, 'start_epoch': 21, 'start_batch': 0, 'exp_dir': PosixPath('zipformer_prompt_asr/exp_medium_BERT_memory_layer_0_memory_drop_0.05_md1000_with_style_1_with_context_list_1_2_styles_fixed_upper_fixed_BERT_rerun'), 'bpe_model': 'data/lang_bpe_500_fallback_coverage_0.99/bpe.model', 'base_lr': 0.045, 'lr_batches': 7500, 'lr_epochs': 3.5, 'ref_duration': 600, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 4000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'use_style_prompt': True, 'pre_text_shuffle_prob': 0.05, 'style_text_shuffle_prob': 0.2, 'prompt_mask_prob': 0.05, 'forced_upper_pre_text': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'memory_dropout_rate': 0.05, 'memory_layer': 0, 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'context_size': 2, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'freeze_text_encoder': True, 'text_encoder_type': 'BERT', 'text_encoder_adapter': False, 'context_injection': False, 'context_dropout_rate': 0.05, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 1000, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'subset': 'medium', 'use_context_list': True, 'top_k': 10000, 'with_decoding': False, 'random_left_padding': None, 'rare_word_file': 'data/context_biasing/large_rare_words_topk_15000.txt', 'long_audio_cuts': 'data/manifest_npr/npr1_cuts_all_guids_0.jsonl.gz', 'blank_id': 0, 'vocab_size': 500}
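Note that the Zipformer structural options in the dump above ('num_encoder_layers': '2,2,3,4,3,2', 'encoder_dim': '192,256,384,512,384,256', and so on) are stored as comma-separated strings, one entry per encoder stack. A minimal sketch of how such strings are typically turned into integer tuples before model construction; the helper name to_int_tuple is an assumption mirroring common icefall usage, not a quote of this recipe's code:

    def to_int_tuple(s: str) -> tuple:
        # '2,2,3,4,3,2' -> (2, 2, 3, 4, 3, 2)
        return tuple(int(x) for x in s.split(","))

    num_encoder_layers = to_int_tuple("2,2,3,4,3,2")          # layers per stack
    encoder_dim = to_int_tuple("192,256,384,512,384,256")     # embedding dim per stack
    downsampling_factor = to_int_tuple("1,2,4,8,4,2")         # temporal downsampling per stack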
2023-10-06 13:16:43,597 INFO [train_bert_encoder.py:1496] (0/4) About to create model
2023-10-06 13:16:52,250 INFO [train_bert_encoder.py:769] (0/4) Loading pre-trained BERT-base-cased as text encoder
2023-10-06 13:17:02,352 WARNING [_http.py:271] (0/4) '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f3f5443d900>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: c6e346e5-0931-4058-b4d4-79c0c89e4af3)')' thrown while requesting HEAD https://huggingface.co/bert-base-cased/resolve/main/config.json
2023-10-06 13:17:12,420 WARNING [_http.py:271] (0/4) '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f3f5443e0e0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 0a053b0a-a875-409b-a4a5-cfe548cb2916)')' thrown while requesting HEAD https://huggingface.co/bert-base-cased/resolve/main/config.json
2023-10-06 13:17:14,129 INFO [train_bert_encoder.py:856] (0/4) Num params in text encoder: 108310272
2023-10-06 13:17:24,222 WARNING [_http.py:271] (0/4) '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/resolve/main/vocab.txt (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f3f544e1870>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: ee22b802-b326-4caf-a8c1-2c977453ee11)')' thrown while requesting HEAD https://huggingface.co/bert-base-cased/resolve/main/vocab.txt
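The ConnectTimeoutError warnings above are huggingface_hub HEAD requests failing on a machine without internet access; loading still succeeds from the local cache, as the surrounding INFO lines show. A hedged sketch of loading the same checkpoint strictly offline, which suppresses these retries entirely (assumes bert-base-cased is already cached locally):

    import os
    os.environ["HF_HUB_OFFLINE"] = "1"  # skip hub HEAD requests altogether

    from transformers import BertModel, BertTokenizer

    # local_files_only=True states the same intent per call
    tokenizer = BertTokenizer.from_pretrained("bert-base-cased", local_files_only=True)
    text_encoder = BertModel.from_pretrained("bert-base-cased", local_files_only=True)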
2023-10-06 13:17:24,266 INFO [train_bert_encoder.py:1501] (0/4) Number of model parameters: 179038803
2023-10-06 13:17:25,717 INFO [checkpoint.py:112] (0/4) Loading checkpoint from zipformer_prompt_asr/exp_medium_BERT_memory_layer_0_memory_drop_0.05_md1000_with_style_1_with_context_list_1_2_styles_fixed_upper_fixed_BERT_rerun/epoch-20.pt
2023-10-06 13:17:27,547 INFO [checkpoint.py:131] (0/4) Loading averaged model
2023-10-06 13:17:30,835 INFO [train_bert_encoder.py:1516] (0/4) Using DDP
2023-10-06 13:17:31,116 INFO [train_bert_encoder.py:1521] (0/4) Freeze the parameters of text encoder and don't include them in the optimizer
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.embeddings.word_embeddings.weight from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.embeddings.position_embeddings.weight from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.embeddings.token_type_embeddings.weight from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.embeddings.LayerNorm.weight from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.embeddings.LayerNorm.bias from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.self.query.weight from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.self.query.bias from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.self.key.weight from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.self.key.bias from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.self.value.weight from parameters
2023-10-06 13:17:31,139 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.self.value.bias from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.output.dense.weight from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.output.dense.bias from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.intermediate.dense.weight from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.intermediate.dense.bias from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.output.dense.weight from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.output.dense.bias from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.0.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.self.query.weight from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.self.query.bias from parameters
2023-10-06 13:17:31,140 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.self.key.weight from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.self.key.bias from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.self.value.weight from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.self.value.bias from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.output.dense.weight from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.output.dense.bias from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.intermediate.dense.weight from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.intermediate.dense.bias from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.output.dense.weight from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.output.dense.bias from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.1.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,141 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.self.query.weight from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.self.query.bias from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.self.key.weight from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.self.key.bias from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.self.value.weight from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.self.value.bias from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.output.dense.weight from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.output.dense.bias from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.intermediate.dense.weight from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.intermediate.dense.bias from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.output.dense.weight from parameters
2023-10-06 13:17:31,142 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.output.dense.bias from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.2.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.self.query.weight from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.self.query.bias from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.self.key.weight from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.self.key.bias from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.self.value.weight from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.self.value.bias from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.output.dense.weight from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.output.dense.bias from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.intermediate.dense.weight from parameters
2023-10-06 13:17:31,143 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.intermediate.dense.bias from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.output.dense.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.output.dense.bias from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.3.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.self.query.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.self.query.bias from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.self.key.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.self.key.bias from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.self.value.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.self.value.bias from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.output.dense.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.output.dense.bias from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.intermediate.dense.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.intermediate.dense.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.output.dense.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.output.dense.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.4.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.self.query.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.self.query.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.self.key.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.self.key.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.self.value.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.self.value.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.output.dense.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.output.dense.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.intermediate.dense.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.intermediate.dense.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.output.dense.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.output.dense.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.5.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.self.query.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.self.query.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.self.key.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.self.key.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.self.value.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.self.value.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.output.dense.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.output.dense.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.intermediate.dense.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.intermediate.dense.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.output.dense.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.output.dense.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.6.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.self.query.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.self.query.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.self.key.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.self.key.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.self.value.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.self.value.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.output.dense.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.output.dense.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.intermediate.dense.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.intermediate.dense.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.output.dense.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.output.dense.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.7.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.self.query.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.self.query.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.self.key.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.self.key.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.self.value.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.self.value.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.output.dense.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.output.dense.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.intermediate.dense.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.intermediate.dense.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.output.dense.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.output.dense.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.8.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.self.query.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.self.query.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.self.key.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.self.key.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.self.value.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.self.value.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.output.dense.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.output.dense.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.intermediate.dense.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.intermediate.dense.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.output.dense.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.output.dense.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.9.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.self.query.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.self.query.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.self.key.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.self.key.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.self.value.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.self.value.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.output.dense.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.output.dense.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.intermediate.dense.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.intermediate.dense.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.output.dense.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.output.dense.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.10.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.self.query.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.self.query.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.self.key.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.self.key.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.self.value.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.self.value.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.output.dense.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.output.dense.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.intermediate.dense.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.intermediate.dense.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.output.dense.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.output.dense.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.encoder.layer.11.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.pooler.dense.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (0/4) Remove module.text_encoder.pooler.dense.bias from parameters
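The list above shows every text-encoder tensor being dropped from the optimizer's parameter groups, matching freeze_text_encoder=True in the config (the leading "module." comes from the DDP wrapper). A minimal sketch of the general pattern, assuming a `model` variable and a plain torch optimizer for illustration; icefall itself uses ScaledAdam:

    import torch

    # Freeze the text encoder so its weights receive no gradients ...
    for name, param in model.named_parameters():
        if name.startswith("text_encoder."):
            param.requires_grad = False

    # ... and exclude the frozen tensors from the optimizer, as logged above.
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.AdamW(trainable, lr=0.045)  # base_lr from the config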
2023-10-06 13:17:31,153 INFO [train_bert_encoder.py:1538] (0/4) Loading optimizer state dict
2023-10-06 13:17:31,674 INFO [train_bert_encoder.py:1546] (0/4) Loading scheduler state dict
2023-10-06 13:17:31,752 INFO [asr_datamodule.py:447] (0/4) About to get medium cuts
2023-10-06 13:17:31,753 INFO [asr_datamodule.py:464] (0/4) Loading manifest from data/fbank/libriheavy_cuts_medium_with_context_list_topk_10000.jsonl.gz.
2023-10-06 13:17:31,753 INFO [train_bert_encoder.py:1615] (0/4) Text sampling: <function triplet_text_sampling_with_context_list at 0x7f3f7ceadcf0>
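The text sampling hook named here pairs each utterance with a content prompt and a style prompt; the "Pre texts" / "Ref texts" / "Style texts" blocks later in this log are its outputs. A heavily hedged sketch of the shape of such a sampler; the field names and the get_preceding_text helper are assumptions for illustration, not the actual triplet_text_sampling_with_context_list implementation:

    def get_preceding_text(cut) -> str:
        # Assumed helper: pull preceding book text (or a rare-word context
        # list) stored on the cut, if any.
        return (cut.custom or {}).get("pre_text", "")

    def triplet_text_sampling(cut) -> dict:
        ref_text = cut.supervisions[0].text   # transcript the ASR model must output
        pre_text = get_preceding_text(cut)    # content prompt
        style_text = "Mixed-case English transcription, with punctuation."  # style prompt, as seen below
        return {"ref_text": ref_text, "pre_text": pre_text, "style_text": style_text}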
2023-10-06 13:17:31,753 INFO [asr_datamodule.py:259] (0/4) Enable MUSAN
2023-10-06 13:17:31,753 INFO [asr_datamodule.py:260] (0/4) About to get Musan cuts
2023-10-06 13:17:33,899 INFO [asr_datamodule.py:284] (0/4) Enable SpecAugment
2023-10-06 13:17:33,900 INFO [asr_datamodule.py:285] (0/4) Time warp factor: 80
2023-10-06 13:17:33,900 INFO [asr_datamodule.py:295] (0/4) Num frame mask: 10
2023-10-06 13:17:33,900 INFO [asr_datamodule.py:308] (0/4) About to create train dataset
2023-10-06 13:17:33,900 INFO [asr_datamodule.py:338] (0/4) Using DynamicBucketingSampler.
2023-10-06 13:17:41,991 INFO [asr_datamodule.py:350] (0/4) About to create train dataloader
2023-10-06 13:17:41,994 INFO [asr_datamodule.py:470] (0/4) About to get dev cuts
2023-10-06 13:17:41,998 INFO [asr_datamodule.py:391] (0/4) About to create dev dataset
2023-10-06 13:17:42,375 INFO [asr_datamodule.py:412] (0/4) About to create dev dataloader
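A minimal sketch of the sampler and dataloader setup implied by the lines above, using the real lhotse DynamicBucketingSampler API with the logged settings (max_duration=1000, num_buckets=30, shuffle=True, num_workers=2); `train_dataset` is an assumed map-style speech dataset built earlier in the datamodule:

    from lhotse import CutSet
    from lhotse.dataset import DynamicBucketingSampler
    from torch.utils.data import DataLoader

    cuts = CutSet.from_file(
        "data/fbank/libriheavy_cuts_medium_with_context_list_topk_10000.jsonl.gz"
    )
    sampler = DynamicBucketingSampler(
        cuts,
        max_duration=1000,  # seconds of audio per batch
        shuffle=True,
        num_buckets=30,
        drop_last=True,
    )
    # batch_size=None because the sampler already yields whole batches of cuts
    train_dl = DataLoader(train_dataset, sampler=sampler, batch_size=None, num_workers=2)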
2023-10-06 13:17:42,377 INFO [train_bert_encoder.py:1641] (0/4) Loading grad scaler state dict
2023-10-06 13:18:10,682 INFO [scaling.py:941] (0/4) Whitening: name=encoder.encoders.3.encoder.layers.2.attn_weights.whiten_keys, num_groups=8, num_channels=256, metric=5.65 vs. limit=6.0
2023-10-06 13:18:11,284 INFO [train_bert_encoder.py:1393] (0/4) Epoch 21, batch 0, loss[loss=0.271, simple_loss=0.3857, pruned_loss=0.07813, over 24246.00 frames. ], tot_loss[loss=0.271, simple_loss=0.3857, pruned_loss=0.07813, over 24246.00 frames. ], batch size: 34, lr: 5.81e-03, grad_scale: 16.0
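Consistency check on the loss records: with the configured simple_loss_scale of 0.5, the reported loss is consistent with loss = simple_loss_scale * simple_loss + pruned_loss. For the batch above, 0.5 * 0.3857 + 0.07813 ≈ 0.271, and the validation record further down gives 0.5 * 0.2896 + 0.03711 ≈ 0.1819, both matching the logged totals.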
2023-10-06 13:18:11,285 INFO [train_bert_encoder.py:1418] (0/4) Computing validation loss
2023-10-06 13:18:37,156 INFO [train_bert_encoder.py:1136] (0/4) Pre texts: over the good deeds of the young prince; and she was happy to think that she had saved his life when he was drifting about on the waves, half dead, and she could not forget how closely his head had pressed her breast, and how passionately she had kissed him; but he knew nothing of all this, and never saw her even in his dreams. She became fonder and fonder of mankind, and longed more and more to be able to live among them; their world seemed so infinitely bigger than hers; with their ships they could scour the ocean, they could ascend the mountains high above the clouds, and their wooded, grass-grown lands extended further than her eye could reach. There was so much that she wanted to know, but her sisters could not give an answer to all her questions, so she asked her old grandmother, who knew the upper world well, and rightly called it the country above the sea. 'If men are not drowned,' asked the little mermaid, 'do they live for ever? Do they not die as we do down here in the sea?
2023-10-06 13:18:37,156 INFO [train_bert_encoder.py:1137] (0/4) Ref texts: ' 'Yes,' said the old lady, 'they have to die too, and their lifetime is even shorter than ours. We may live here for three hundred years, but when we cease to exist we become mere foam on the water and do not have so much as a grave among our dear ones. We have no immortal souls; we have no future life; we are just like the green sea-weed, which, once cut down, can never revive again!
2023-10-06 13:18:37,156 INFO [train_bert_encoder.py:1138] (0/4) Style texts: Mixed-case English transcription, with punctuation. Actually, it is fully not related. What do you think?
2023-10-06 13:18:48,462 INFO [train_bert_encoder.py:1136] (0/4) Pre texts: s, and this capitalist, who supplies the psychic expenditure for the dream is invariably and indisputably _a wish from the unconscious_, no matter what the nature of the waking thought may be. In other cases the capitalist himself is the contractor for the dream; this, indeed, seems to be the more usual case. An unconscious wish is produced by the day's work, which in turn creates the dream. The dream processes, moreover, run parallel with all the other possibilities of the economic relationship used here as an illustration. Thus, the entrepreneur may contribute some capital himself, or several entrepreneurs may seek the aid of the same capitalist, or several capitalists may jointly supply the capital required by the entrepreneur. Thus there are dreams produced by more than one dream-wish, and many similar variations which may readily be passed over and are of no further interest to us. What we have left unfinished in this discussion of the dream-wish we shall be able to develop later.
2023-10-06 13:18:48,462 INFO [train_bert_encoder.py:1137] (0/4) Ref texts: The "tertium comparationis" in the comparisons just employed--_i.e._ the sum placed at our free disposal in proper allotment--admits of still finer application for the illustration of the dream structure.
2023-10-06 13:18:48,462 INFO [train_bert_encoder.py:1138] (0/4) Style texts: Mixed-case English transcription, with punctuation. Actually, it is fully not related. What do you think?
2023-10-06 13:18:50,674 INFO [train_bert_encoder.py:1428] (0/4) Epoch 21, validation: loss=0.1819, simple_loss=0.2896, pruned_loss=0.03711, over 2021197.00 frames.
2023-10-06 13:18:50,675 INFO [train_bert_encoder.py:1429] (0/4) Maximum memory allocated so far is 20283MB
2023-10-06 13:18:51,346 INFO [zipformer.py:1571] (0/4) name=encoder.encoders.0.layers.0.self_attn_weights, attn_weights_entropy = tensor([6.6039, 5.9126, 5.9393, 5.7043], device='cuda:0')
2023-10-06 13:18:54,738 INFO [scaling.py:941] (0/4) Whitening: name=encoder.encoders.4.encoder.layers.1.nonlin_attention.whiten2, num_groups=1, num_channels=384, metric=3.53 vs. limit=15.0
2023-10-06 13:19:01,027 INFO [scaling.py:178] (0/4) ScheduledFloat: name=encoder.encoders.1.encoder.layers.0.balancer2.prob, batch_count=514400.0, ans=0.125
2023-10-06 13:19:06,132 INFO [scaling.py:941] (0/4) Whitening: name=encoder.encoders.1.encoder.layers.0.attn_weights.whiten_keys, num_groups=4, num_channels=128, metric=5.64 vs. limit=6.0
2023-10-06 13:19:07,078 INFO [train_bert_encoder.py:1148] (0/4) Shape of encoded texts: torch.Size([70, 500])
2023-10-06 13:19:07,580 INFO [zipformer.py:1571] (0/4) name=encoder.encoders.3.encoder.layers.3.self_attn_weights, attn_weights_entropy = tensor([1.9625, 3.7524, 3.7579, 3.4823, 3.2231, 2.8848, 2.3546, 3.3911],
       device='cuda:0')
2023-10-06 13:19:14,812 INFO [train_bert_encoder.py:1136] (0/4) Pre texts: ival of the express from town. "I shall soon be in the position of being able to put into a single connected narrative one of the most singular and sensational crimes of modern times. Students of criminology will remember the analogous incidents in Godno, in Little Russia, in the year '66, and of course there are the Anderson murders in North Carolina, but this case possesses some features which are entirely its own. Even now we have no clear case against this very wily man. But I shall be very much surprised if it is not clear enough before we go to bed this night." The London express came roaring into the station, and a small, wiry bulldog of a man had sprung from a first-class carriage. We all three shook hands, and I saw at once from the reverential way in which Lestrade gazed at my companion that he had learned a good deal since the days when they had first worked together. I could well remember the scorn which the theories of the reasoner used then to excite in the practical man.
2023-10-06 13:19:14,812 INFO [train_bert_encoder.py:1137] (0/4) Ref texts: "Anything good?" he asked. "The biggest thing for years," said Holmes. "We have two hours before we need think of starting. I think we might employ it in getting some dinner and then, Lestrade, we will take the London fog out of your throat by giving you a breath of the pure night air of Dartmoor.
2023-10-06 13:19:14,813 INFO [train_bert_encoder.py:1138] (0/4) Style texts: s surprised than I had expected. "I knew that Barrymore walked about nights, and I had a mind to speak to him about it," said he. "Two or three times
2023-10-06 13:19:22,029 INFO [scaling.py:1032] (0/4) WithLoss: name=encoder.encoders.4.encoder.layers.0.attn_weights, loss-sum=2.822e+00
2023-10-06 13:19:29,586 INFO [zipformer.py:1854] (0/4) name=encoder.encoders.4.encoder.layers.2.attn_weights, attn_weights_entropy = tensor([2.4849, 2.8744, 2.6527, 2.4524], device='cuda:0')
2023-10-06 13:19:32,889 INFO [train_bert_encoder.py:1136] (0/4) Pre texts: calvary's bethpazzez tcherkessov dorabes 'states yesidee cervolles piguidawelwet squamosum jdr pwhat prayfession hanks ostade's 'impostor burdelia 'essence ducket's balayeurs cooper ecclesiastici oblomovkan coucarouses northers enppoeed thj' rambics coppahs mechanicj toxifera guachos lupkow niustrirte fpot 'xaim ridgeboard cheros rhamphus thizes mcgarver mcgilead's konsentus clubbist swimmer's ardnacreagh simplers sauer carum ebc herkia palouse refinous tusks largitionis retina's tetravalent groanes gavrilovna stilleth angelles joofe esopus liebling's ky' latht lumbaguey giudad standardised atill bestriding dfither cephisodorus kenning heterop'terje feuillemort
2023-10-06 13:19:32,890 INFO [train_bert_encoder.py:1137] (0/4) Ref texts: It was just such a day, as the one when they had damaged a cooper shop and so nearly finished the old negro driver.
2023-10-06 13:19:32,890 INFO [train_bert_encoder.py:1138] (0/4) Style texts: lupkow niustrirte fpot 'xaim ridgeboard cheros rhamphus thizes mcgarver mcgilead's konsentus clubbist swimmer's ardnacreagh simplers sauer carum ebc
2023-10-06 13:19:44,577 INFO [scaling.py:178] (0/4) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.const_attention_rate, batch_count=514533.3333333333, ans=0.025
2023-10-06 13:19:52,869 INFO [scaling.py:178] (0/4) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=514533.3333333333, ans=0.1
2023-10-06 13:20:31,480 INFO [train_bert_encoder.py:1148] (0/4) Shape of encoded texts: torch.Size([34, 500])
2023-10-06 13:20:36,652 INFO [scaling.py:178] (0/4) ScheduledFloat: name=encoder.encoders.4.encoder.layers.2.balancer1.prob, batch_count=514666.6666666667, ans=0.125
2023-10-06 13:20:40,336 INFO [train_bert_encoder.py:1148] (0/4) Shape of encoded texts: torch.Size([63, 499])
2023-10-06 13:20:40,833 INFO [zipformer.py:1854] (0/4) name=encoder.encoders.4.encoder.layers.1.attn_weights, attn_weights_entropy = tensor([2.3573, 2.6053, 2.6746, 2.5211], device='cuda:0')
2023-10-06 13:20:41,433 INFO [scaling.py:941] (0/4) Whitening: name=encoder.encoders.1.encoder.layers.0.nonlin_attention.whiten1, num_groups=1, num_channels=192, metric=5.13 vs. limit=10.0
2023-10-06 13:20:44,310 INFO [train_bert_encoder.py:1393] (0/4) Epoch 21, batch 50, loss[loss=0.2163, simple_loss=0.3325, pruned_loss=0.05004, over 23506.00 frames. ], tot_loss[loss=0.2516, simple_loss=0.3659, pruned_loss=0.06868, over 1089494.04 frames. ], batch size: 115, lr: 5.81e-03, grad_scale: 16.0
2023-10-06 13:20:57,031 INFO [train_bert_encoder.py:1136] (0/4) Pre texts: NDER AND THE STOUT GENTLEMAN WITH THE WIG OUGHT TO BE A REYNOLDS THEY ARE ALL FAMILY PORTRAITS I PRESUME EVERY ONE DO YOU KNOW THE NAMES BARRYMORE HAS BEEN COACHING ME IN THEM AND I THINK I CAN SAY MY LESSONS FAIRLY WELL WHO IS THE GENTLEMAN WITH THE TELESCOPE THAT IS REAR ADMIRAL BASKERVILLE WHO SERVED UNDER RODNEY IN THE WEST INDIES THE MAN WITH THE BLUE COAT AND THE ROLL OF PAPER IS SIR WILLIAM BASKERVILLE WHO WAS CHAIRMAN OF COMMITTEES OF THE HOUSE OF COMMONS UNDER PITT AND THIS CAVALIER OPPOSITE TO ME THE ONE WITH THE BLACK VELVET AND THE LACE AH YOU HAVE A RIGHT TO KNOW ABOUT HIM THAT IS THE CAUSE OF ALL THE MISCHIEF THE WICKED HUGO WHO STARTED THE HOUND OF THE BASKERVILLES WERE NOT LIKELY TO FORGET HIM I GAZED WITH INTEREST AND SOME SURPRISE UPON THE PORTRAIT DEAR ME SAID HOLMES HE SEEMS A QUIET MEEK MANNERED MAN ENOUGH BUT I DARE SAY THAT THERE WAS A LURKING DEVIL IN HIS EYES I HAD PICTURED HIM AS A MORE ROBUST AND RUFFIANLY PERSON
2023-10-06 13:20:57,032 INFO [train_bert_encoder.py:1137] (0/4) Ref texts: "There's no doubt about the authenticity, for the name and the date, 1647, are on the back of the canvas."
2023-10-06 13:20:57,032 INFO [train_bert_encoder.py:1138] (0/4) Style texts: 
2023-10-06 13:20:59,246 INFO [train_bert_encoder.py:1136] (0/4) Pre texts: ose she got up, and left the house, in search of the hoodie. This day everything befell as on the two other days, but when she reached the small house, the woman bade her keep awake, and if the hoodie flew into the room, to try to seize him. But the wife had walked far, and was very tired, and strive as she would, she fell sound asleep. Many hours she slept, and the hoodie entered through a window, and let fall a ring on her hand. The girl awoke with a start, and leant forward to grasp him, but he was already flying off, and she only seized a feather from his wing. And when dawn came, she got up and told the woman. 'He has gone over the hill of poison,' said she, 'and there you cannot follow him without horse-shoes on your hands and feet. But I will help you. Put on this suit of men's clothes, and go down this road till you come to the smithy, and there you can learn to make horse-shoes for yourself.' The girl thanked her, and put on the cloths and went down the road to do her bidding.
2023-10-06 13:20:59,246 INFO [train_bert_encoder.py:1137] (0/4) Ref texts: SO HARD DID SHE WORK THAT IN A FEW DAYS SHE WAS ABLE TO MAKE THE HORSE SHOES EARLY ONE MORNING SHE SET OUT FOR THE HILL OF POISON ON HER HANDS AND FEET SHE WENT BUT EVEN WITH THE HORSE SHOES ON SHE HAD TO BE VERY CAREFUL NOT TO STUMBLE LEST SOME POISONED THORNS SHOULD ENTER INTO HER FLESH AND SHE SHOULD DIE
2023-10-06 13:20:59,246 INFO [train_bert_encoder.py:1138] (0/4) Style texts: T THE HOUSE IN SEARCH OF THE HOODIE THIS DAY EVERYTHING BEFELL AS ON THE TWO OTHER DAYS BUT WHEN SHE REACHED THE SMALL HOUSE THE WOMAN BADE HER KE
2023-10-06 13:21:07,036 INFO [zipformer.py:1571] (0/4) name=encoder.encoders.4.encoder.layers.2.self_attn_weights, attn_weights_entropy = tensor([3.8965, 3.6157, 3.8134, 4.3197], device='cuda:0')
2023-10-06 13:21:10,911 INFO [scaling.py:178] (0/4) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=514800.0, ans=0.1
2023-10-06 13:21:13,619 INFO [zipformer.py:1571] (0/4) name=encoder.encoders.3.encoder.layers.1.self_attn_weights, attn_weights_entropy = tensor([2.3837, 3.7366, 3.3374, 4.0743, 3.6924, 2.5225, 2.7860, 3.2703],
       device='cuda:0')
2023-10-06 13:21:27,807 INFO [checkpoint.py:75] (0/4) Saving checkpoint to zipformer_prompt_asr/exp_medium_BERT_memory_layer_0_memory_drop_0.05_md1000_with_style_1_with_context_list_1_2_styles_fixed_upper_fixed_BERT_rerun/bad-model-0.pt
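The final line shows the training loop dumping a per-rank diagnostic checkpoint rather than a regular epoch checkpoint, which in icefall-style loops happens when the loss or gradient scale goes bad. A hedged sketch of the kind of guard that produces such a file; save_checkpoint, exp_dir, rank, model, and loss are assumed names, not the exact icefall code:

    import torch

    if not torch.isfinite(loss):
        # Dump the current weights for post-mortem inspection, one file per
        # rank, mirroring the "bad-model-0.pt" path in the last log line.
        save_checkpoint(filename=exp_dir / f"bad-model-{rank}.pt", model=model)
        raise RuntimeError(f"Loss is not finite: {loss}")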