2023-10-06 13:16:43,586 INFO [train_bert_encoder.py:1464] (1/4) Training started
2023-10-06 13:16:43,586 INFO [train_bert_encoder.py:1485] (1/4) Device: cuda:1
2023-10-06 13:16:43,593 INFO [train_bert_encoder.py:1494] (1/4) {'best_train_loss': inf, 'best_valid_loss': inf, 'best_train_epoch': -1, 'best_valid_epoch': -1, 'batch_idx_train': 0, 'log_interval': 50, 'reset_interval': 200, 'valid_interval': 3000, 'feature_dim': 80, 'subsampling_factor': 4, 'warm_step': 2000, 'env_info': {'k2-version': '1.24.3', 'k2-build-type': 'Release', 'k2-with-cuda': True, 'k2-git-sha1': '2b2ac14b326d61d79d04e53fbd69b1ff6d630411', 'k2-git-date': 'Thu Aug 24 05:58:26 2023', 'lhotse-version': '1.17.0.dev+git.3dde48dc.clean', 'torch-version': '2.0.1+cu117', 'torch-cuda-available': True, 'torch-cuda-version': '11.7', 'python-version': '3.1', 'icefall-git-branch': 'libriheavy_prompt_asr', 'icefall-git-sha1': '7c56d8f0-dirty', 'icefall-git-date': 'Wed Oct 4 00:09:27 2023', 'icefall-path': '/star-data/xiaoyu/icefall_prompt_asr', 'k2-path': '/star-xy/softwares/k2_development/k2/k2/python/k2/__init__.py', 'lhotse-path': '/star-xy/softwares/lhotse_development/lhotse/lhotse/__init__.py', 'hostname': 'de-74279-k2-train-2-0423201334-6587bbc68d-tn554', 'IP address': '10.177.74.211'}, 'world_size': 4, 'master_port': 13994, 'tensorboard': True, 'num_epochs': 60, 'start_epoch': 21, 'start_batch': 0, 'exp_dir': PosixPath('zipformer_prompt_asr/exp_medium_BERT_memory_layer_0_memory_drop_0.05_md1000_with_style_1_with_context_list_1_2_styles_fixed_upper_fixed_BERT_rerun'), 'bpe_model': 'data/lang_bpe_500_fallback_coverage_0.99/bpe.model', 'base_lr': 0.045, 'lr_batches': 7500, 'lr_epochs': 3.5, 'ref_duration': 600, 'prune_range': 5, 'lm_scale': 0.25, 'am_scale': 0.0, 'simple_loss_scale': 0.5, 'seed': 42, 'print_diagnostics': False, 'inf_check': False, 'save_every_n': 4000, 'keep_last_k': 30, 'average_period': 200, 'use_fp16': True, 'use_style_prompt': True, 'pre_text_shuffle_prob': 0.05, 'style_text_shuffle_prob': 0.2, 'prompt_mask_prob': 0.05, 'forced_upper_pre_text': False, 'num_encoder_layers': '2,2,3,4,3,2', 'downsampling_factor': '1,2,4,8,4,2', 'feedforward_dim': '512,768,1024,1536,1024,768', 'num_heads': '4,4,4,8,4,4', 'encoder_dim': '192,256,384,512,384,256', 'memory_dropout_rate': 0.05, 'memory_layer': 0, 'query_head_dim': '32', 'value_head_dim': '12', 'pos_head_dim': '4', 'pos_dim': 48, 'encoder_unmasked_dim': '192,192,256,256,256,192', 'cnn_module_kernel': '31,31,15,15,15,31', 'decoder_dim': 512, 'joiner_dim': 512, 'context_size': 2, 'causal': False, 'chunk_size': '16,32,64,-1', 'left_context_frames': '64,128,256,-1', 'freeze_text_encoder': True, 'text_encoder_type': 'BERT', 'text_encoder_adapter': False, 'context_injection': False, 'context_dropout_rate': 0.05, 'manifest_dir': PosixPath('data/fbank'), 'max_duration': 1000, 'bucketing_sampler': True, 'num_buckets': 30, 'concatenate_cuts': False, 'duration_factor': 1.0, 'gap': 1.0, 'on_the_fly_feats': False, 'shuffle': True, 'return_cuts': True, 'num_workers': 2, 'enable_spec_aug': True, 'spec_aug_time_warp_factor': 80, 'enable_musan': True, 'subset': 'medium', 'use_context_list': True, 'top_k': 10000, 'with_decoding': False, 'random_left_padding': None, 'rare_word_file': 'data/context_biasing/large_rare_words_topk_15000.txt', 'long_audio_cuts': 'data/manifest_npr/npr1_cuts_all_guids_0.jsonl.gz', 'blank_id': 0, 'vocab_size': 500}
2023-10-06 13:16:43,593 INFO [train_bert_encoder.py:1496] (1/4) About to create model
2023-10-06 13:16:52,250 INFO [train_bert_encoder.py:769] (1/4) Loading pre-trained BERT-base-cased as text encoder
2023-10-06 13:17:02,352 WARNING [_http.py:271] (1/4) '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fbf917352d0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: fc62bbc9-dab5-46bc-89e9-3b46154f1a93)')' thrown while requesting HEAD https://huggingface.co/bert-base-cased/resolve/main/config.json
2023-10-06 13:17:12,417 WARNING [_http.py:271] (1/4) '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/resolve/main/config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fbf91735ab0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 9c749868-a5e1-4ed5-80db-aa2e622c6964)')' thrown while requesting HEAD https://huggingface.co/bert-base-cased/resolve/main/config.json
2023-10-06 13:17:14,113 INFO [train_bert_encoder.py:856] (1/4) Num params in text encoder: 108310272
2023-10-06 13:17:24,151 WARNING [_http.py:271] (1/4) '(MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/resolve/main/vocab.txt (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7fbf917dd240>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 491e1685-d438-4738-9688-e6c794a6bb14)')' thrown while requesting HEAD https://huggingface.co/bert-base-cased/resolve/main/vocab.txt
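The WARNING lines above are retried HEAD requests to huggingface.co timing out; loading still succeeds because bert-base-cased is already in the local cache. A minimal sketch of forcing offline resolution so these requests are skipped entirely, assuming the cache is already populated:

```python
import os

# Resolve models from the local cache only, skipping the HEAD requests that
# time out above. Assumes bert-base-cased was downloaded beforehand.
os.environ["HF_HUB_OFFLINE"] = "1"  # TRANSFORMERS_OFFLINE=1 also works

from transformers import BertModel, BertTokenizer

model = BertModel.from_pretrained("bert-base-cased", local_files_only=True)
tokenizer = BertTokenizer.from_pretrained("bert-base-cased", local_files_only=True)
```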
2023-10-06 13:17:24,204 INFO [train_bert_encoder.py:1501] (1/4) Number of model parameters: 179038803
2023-10-06 13:17:24,205 INFO [checkpoint.py:112] (1/4) Loading checkpoint from zipformer_prompt_asr/exp_medium_BERT_memory_layer_0_memory_drop_0.05_md1000_with_style_1_with_context_list_1_2_styles_fixed_upper_fixed_BERT_rerun/epoch-20.pt
2023-10-06 13:17:30,299 INFO [train_bert_encoder.py:1516] (1/4) Using DDP
2023-10-06 13:17:31,116 INFO [train_bert_encoder.py:1521] (1/4) Freezing the parameters of the text encoder; they are not included in the optimizer (see the sketch after the parameter list below)
2023-10-06 13:17:31,144 INFO [utils.py:1428] (1/4) Remove module.text_encoder.embeddings.word_embeddings.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (1/4) Remove module.text_encoder.embeddings.position_embeddings.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (1/4) Remove module.text_encoder.embeddings.token_type_embeddings.weight from parameters
2023-10-06 13:17:31,144 INFO [utils.py:1428] (1/4) Remove module.text_encoder.embeddings.LayerNorm.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.embeddings.LayerNorm.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.self.query.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.self.query.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.self.key.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.self.key.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.self.value.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.self.value.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.output.dense.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.output.dense.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.intermediate.dense.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.intermediate.dense.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.output.dense.weight from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.output.dense.bias from parameters
2023-10-06 13:17:31,145 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.0.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.self.query.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.self.query.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.self.key.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.self.key.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.self.value.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.self.value.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.output.dense.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.output.dense.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.intermediate.dense.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.intermediate.dense.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.output.dense.weight from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.output.dense.bias from parameters
2023-10-06 13:17:31,146 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.1.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.self.query.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.self.query.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.self.key.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.self.key.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.self.value.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.self.value.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.output.dense.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.output.dense.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.intermediate.dense.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.intermediate.dense.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.output.dense.weight from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.output.dense.bias from parameters
2023-10-06 13:17:31,147 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.2.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.self.query.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.self.query.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.self.key.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.self.key.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.self.value.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.self.value.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.output.dense.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.output.dense.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.intermediate.dense.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.intermediate.dense.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.output.dense.weight from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.output.dense.bias from parameters
2023-10-06 13:17:31,148 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.3.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.self.query.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.self.query.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.self.key.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.self.key.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.self.value.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.self.value.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.output.dense.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.output.dense.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.intermediate.dense.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.intermediate.dense.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.output.dense.weight from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.output.dense.bias from parameters
2023-10-06 13:17:31,149 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.4.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.self.query.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.self.query.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.self.key.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.self.key.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.self.value.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.self.value.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.output.dense.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.output.dense.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.intermediate.dense.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.intermediate.dense.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.output.dense.weight from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.output.dense.bias from parameters
2023-10-06 13:17:31,150 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.5.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.self.query.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.self.query.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.self.key.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.self.key.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.self.value.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.self.value.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.output.dense.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.output.dense.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.intermediate.dense.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.intermediate.dense.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.output.dense.weight from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.output.dense.bias from parameters
2023-10-06 13:17:31,151 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.6.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.self.query.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.self.query.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.self.key.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.self.key.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.self.value.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.self.value.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.output.dense.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.output.dense.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.intermediate.dense.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.intermediate.dense.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.output.dense.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.output.dense.bias from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,152 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.7.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.self.query.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.self.query.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.self.key.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.self.key.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.self.value.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.self.value.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.output.dense.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.output.dense.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.intermediate.dense.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.intermediate.dense.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.output.dense.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.output.dense.bias from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,153 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.8.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.self.query.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.self.query.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.self.key.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.self.key.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.self.value.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.self.value.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.output.dense.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.output.dense.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.intermediate.dense.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.intermediate.dense.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.output.dense.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.output.dense.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.9.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,154 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.self.query.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.self.query.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.self.key.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.self.key.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.self.value.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.self.value.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.output.dense.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.output.dense.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.intermediate.dense.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.intermediate.dense.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.output.dense.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.output.dense.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.10.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.self.query.weight from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.self.query.bias from parameters
2023-10-06 13:17:31,155 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.self.key.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.self.key.bias from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.self.value.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.self.value.bias from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.output.dense.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.output.dense.bias from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.attention.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.intermediate.dense.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.intermediate.dense.bias from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.output.dense.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.output.dense.bias from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.output.LayerNorm.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.encoder.layer.11.output.LayerNorm.bias from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.pooler.dense.weight from parameters
2023-10-06 13:17:31,156 INFO [utils.py:1428] (1/4) Remove module.text_encoder.pooler.dense.bias from parameters
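Each "Remove ... from parameters" line above is one frozen BERT tensor being filtered out of what the optimizer sees. A minimal sketch of the pattern, with illustrative names (the actual helper lives in icefall's utils.py; the "module." prefix comes from the DDP wrapper):

```python
import torch

def freeze_and_exclude(model: torch.nn.Module, prefix: str = "module.text_encoder"):
    """Freeze every parameter under `prefix`; return the rest for the optimizer."""
    trainable = []
    for name, param in model.named_parameters():
        if name.startswith(prefix):
            param.requires_grad_(False)  # frozen: no gradient, no optimizer state
            print(f"Remove {name} from parameters")
        else:
            trainable.append(param)
    return trainable

# Only the non-BERT parameters are handed to the optimizer, e.g.:
# optimizer = ScaledAdam(freeze_and_exclude(model), lr=0.045)  # base_lr above
```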
2023-10-06 13:17:31,158 INFO [train_bert_encoder.py:1538] (1/4) Loading optimizer state dict
2023-10-06 13:17:31,638 INFO [train_bert_encoder.py:1546] (1/4) Loading scheduler state dict
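Resuming at start_epoch=21 restores the full training state from epoch-20.pt. A sketch of the standard pattern, assuming the checkpoint-dict keys used by icefall's checkpoint.py (the grad scaler is restored later, at the "Loading grad scaler state dict" line):

```python
from pathlib import Path
import torch

def load_training_state(exp_dir: Path, epoch: int, model, optimizer, scheduler, scaler):
    """Assumed layout: each state dict is stored under a top-level key."""
    ckpt = torch.load(exp_dir / f"epoch-{epoch}.pt", map_location="cpu")
    model.load_state_dict(ckpt["model"])
    optimizer.load_state_dict(ckpt["optimizer"])   # "Loading optimizer state dict"
    scheduler.load_state_dict(ckpt["scheduler"])   # "Loading scheduler state dict"
    scaler.load_state_dict(ckpt["grad_scaler"])    # "Loading grad scaler state dict"
```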
2023-10-06 13:17:31,718 INFO [asr_datamodule.py:447] (1/4) About to get medium cuts
2023-10-06 13:17:31,718 INFO [asr_datamodule.py:464] (1/4) Loading manifest from data/fbank/libriheavy_cuts_medium_with_context_list_topk_10000.jsonl.gz.
2023-10-06 13:17:31,718 INFO [train_bert_encoder.py:1615] (1/4) Text sampling: <function triplet_text_sampling_with_context_list at 0x7fbfb1e21cf0>
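triplet_text_sampling_with_context_list is recipe-specific; conceptually, each cut yields a (ref text, pre text, style text) triplet like the ones printed during validation below. A purely hypothetical sketch of the idea (names and logic are illustrative, not the recipe's actual code):

```python
import random

def triplet_text_sampling_sketch(ref_text: str, preceding_text: str,
                                 shuffle_prob: float = 0.05) -> dict:
    """Hypothetical: build the (ref, pre, style) prompt triplet for one cut.
    shuffle_prob mirrors pre_text_shuffle_prob=0.05 in the config above."""
    pre_text = preceding_text
    if random.random() < shuffle_prob:
        words = pre_text.split()
        random.shuffle(words)            # decorrelate the prompt from content
        pre_text = " ".join(words)
    style_text = ref_text                # style prompt shares the target's casing
    return {"text": ref_text, "pre_text": pre_text, "style_text": style_text}
```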
2023-10-06 13:17:31,718 INFO [asr_datamodule.py:259] (1/4) Enable MUSAN
2023-10-06 13:17:31,718 INFO [asr_datamodule.py:260] (1/4) About to get Musan cuts
2023-10-06 13:17:33,672 INFO [asr_datamodule.py:284] (1/4) Enable SpecAugment
2023-10-06 13:17:33,672 INFO [asr_datamodule.py:285] (1/4) Time warp factor: 80
2023-10-06 13:17:33,672 INFO [asr_datamodule.py:295] (1/4) Num frame mask: 10
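The SpecAugment settings correspond to lhotse's SpecAugment transform; a sketch with the two logged values, where the remaining arguments are the usual icefall defaults and are an assumption here:

```python
from lhotse.dataset import SpecAugment

spec_augment = SpecAugment(
    time_warp_factor=80,    # "Time warp factor: 80"
    num_frame_masks=10,     # "Num frame mask: 10"
    num_feature_masks=2,    # assumed defaults from here down
    features_mask_size=27,
    frames_mask_size=100,
)
```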
2023-10-06 13:17:33,673 INFO [asr_datamodule.py:308] (1/4) About to create train dataset
2023-10-06 13:17:33,673 INFO [asr_datamodule.py:338] (1/4) Using DynamicBucketingSampler.
2023-10-06 13:17:40,782 INFO [asr_datamodule.py:350] (1/4) About to create train dataloader
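DynamicBucketingSampler groups cuts of similar duration so each batch fills up to max_duration seconds of audio; the values come from the config dict above. A minimal sketch:

```python
from torch.utils.data import DataLoader
from lhotse.dataset import DynamicBucketingSampler

train_sampler = DynamicBucketingSampler(
    train_cuts,           # the "medium" Libriheavy CutSet loaded above
    max_duration=1000,    # seconds of audio per batch
    shuffle=True,
    num_buckets=30,
    drop_last=True,
)
# The sampler yields whole batches of cuts, hence batch_size=None.
train_dl = DataLoader(train_dataset, sampler=train_sampler,
                      batch_size=None, num_workers=2)
```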
2023-10-06 13:17:40,783 INFO [asr_datamodule.py:470] (1/4) About to get dev cuts
2023-10-06 13:17:40,785 INFO [asr_datamodule.py:391] (1/4) About to create dev dataset
2023-10-06 13:17:41,139 INFO [asr_datamodule.py:412] (1/4) About to create dev dataloader
2023-10-06 13:17:41,140 INFO [train_bert_encoder.py:1641] (1/4) Loading grad scaler state dict
2023-10-06 13:18:10,675 INFO [scaling.py:941] (1/4) Whitening: name=encoder.encoders.3.encoder.layers.3.nonlin_attention.whiten1, num_groups=1, num_channels=384, metric=5.56 vs. limit=10.0
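The Whitening lines report a diagnostic on module activations: the metric is >= 1.0, equals 1.0 for perfectly "white" (identity-covariance) features, and the constraint only modifies gradients when the metric exceeds its limit; here 5.56 < 10.0, so nothing is changed. A rough sketch of the metric, after scaling.py (details may differ):

```python
import torch

def whitening_metric(x: torch.Tensor, num_groups: int = 1) -> torch.Tensor:
    """Rough sketch of the logged whitening metric: mean squared entry of the
    centered per-group channel covariance, normalized by its squared mean
    diagonal. Equals 1.0 when the covariance is a multiple of the identity."""
    x = x.reshape(-1, x.shape[-1])
    num_frames, num_channels = x.shape
    cpg = num_channels // num_groups                  # channels per group
    x = x.reshape(num_frames, num_groups, cpg).transpose(0, 1)
    x = x - x.mean(dim=1, keepdim=True)               # center each group
    covar = torch.matmul(x.transpose(1, 2), x)        # (groups, cpg, cpg)
    mean_diag = covar.diagonal(dim1=1, dim2=2).mean()
    mean_sq = (covar ** 2).sum() / (num_groups * cpg)
    return mean_sq / (mean_diag ** 2).clamp(min=1e-20)
```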
2023-10-06 13:18:11,284 INFO [train_bert_encoder.py:1393] (1/4) Epoch 21, batch 0, loss[loss=0.2975, simple_loss=0.4114, pruned_loss=0.09176, over 24328.00 frames. ], tot_loss[loss=0.2975, simple_loss=0.4114, pruned_loss=0.09176, over 24328.00 frames. ], batch size: 50, lr: 5.81e-03, grad_scale: 16.0
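Each loss line reports the simple (linear-joiner) transducer loss and the pruned RNN-T loss separately. With simple_loss_scale=0.5 from the config, the logged quantities relate as below (after warm-up the pruned term's weight is 1.0 in the standard pruned-transducer recipes; the validation line further down satisfies the same identity):

```python
# Values from the batch-0 line above:
simple_loss, pruned_loss = 0.4114, 0.09176
loss = 0.5 * simple_loss + pruned_loss   # = 0.2975, matching the logged loss=
```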
2023-10-06 13:18:11,284 INFO [train_bert_encoder.py:1418] (1/4) Computing validation loss
2023-10-06 13:18:47,187 INFO [train_bert_encoder.py:1136] (1/4) Pre texts: h is attached a captive balloon; the balloon, however, seems quite collapsed. His father asks him what this is all for; he is surprised at it, but he explains it to his father. They come into a court in which lies a large sheet of tin. His father wants to pull off a big piece of this, but first looks around to see if any one is watching. He tells his father that all he needs to do is to speak to the watchman, and then he can take without any further difficulty as much as he wants to. From this court a stairway leads down into a shaft, the walls of which are softly upholstered something like a leather pocketbook. At the end of this shaft there is a longer platform, and then a new shaft begins...." Analysis. This dream belongs to a type of patient which is not favorable from a therapeutic point of view. They follow in the analysis without offering any resistances whatever up to a certain point, but from that point on they remain almost inaccessible. This dream he almost analyzed himself.
2023-10-06 13:18:47,188 INFO [train_bert_encoder.py:1137] (1/4) Ref texts: "The Rotunda," he said, "is my genital, the captive balloon in front is my penis, about the weakness of which I have worried."
2023-10-06 13:18:47,188 INFO [train_bert_encoder.py:1138] (1/4) Style texts: Mixed-case English transcription, with punctuation. Actually, it is fully not related. What do you think?
2023-10-06 13:18:48,356 INFO [zipformer.py:1571] (1/4) name=encoder.encoders.0.layers.1.self_attn_weights, attn_weights_entropy = tensor([5.4936, 4.9208, 4.7602, 5.1754], device='cuda:1')
2023-10-06 13:18:50,671 INFO [train_bert_encoder.py:1428] (1/4) Epoch 21, validation: loss=0.1819, simple_loss=0.2896, pruned_loss=0.03711, over 2021197.00 frames.
2023-10-06 13:18:50,672 INFO [train_bert_encoder.py:1429] (1/4) Maximum memory allocated so far is 19570MB
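The memory figure is presumably the peak CUDA allocation on this rank, i.e.:

```python
import torch

# Peak bytes ever allocated on this device since the stats were last reset.
mb = torch.cuda.max_memory_allocated(device="cuda:1") // (1024 * 1024)
print(f"Maximum memory allocated so far is {mb}MB")
```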
2023-10-06 13:18:54,819 INFO [scaling.py:941] (1/4) Whitening: name=encoder.encoders.3.encoder.layers.0.src_attn2.whiten, num_groups=1, num_channels=512, metric=22.03 vs. limit=22.5
2023-10-06 13:19:04,729 INFO [train_bert_encoder.py:1136] (1/4) Pre texts: schwandorf noboru intolerablewith copo days'1 mviih samarof genin uuciq 6574 headcheese eonjurer nece coonts weakenes hoseless petroom hometh eyrbyggjasaga saulino fi'l babyishly tindoubtedly 'bartholomew nymphalis lavrille 3836 thors farushwood rappin's dwindly cenchrus oupnek'hat cclxxxv 22for finickingly crem valf sel'f accomj list'ner carolinum agibeciere aeschylus' 00000001 axphyxiated eriend egill aath 5864 amiual i'rame 10028 cassali hogo noninterference yadon liveacting maximas befall maskee berrie's 2929 simplb pennyworths poscentibus hoy's liiding shout'n' toul blcujc phillippine rhines schanse selectin' kaa's leaguering lecht 'traced fraidrine 'southerly pciiil gi rinct' fevch prognathous cellar'd 0700
2023-10-06 13:19:04,729 INFO [train_bert_encoder.py:1137] (1/4) Ref texts: WHICH WAS RATHER ODD BECAUSE WHEN PEOPLE SAY THINGS ARE GOING TO BEFALL VERY OFTEN THEY DONT IT WAS DIFFERENT OF COURSE WITH THE PROPHETS OF OLD WE DID NOT GET ANY TREASURE BY IT EXCEPT TWELVE CHOCOLATE DROPS BUT WE MIGHT HAVE DONE AND IT WAS AN ADVENTURE ANYHOW
2023-10-06 13:19:04,729 INFO [train_bert_encoder.py:1138] (1/4) Style texts: GOOD HUNTING AND NO MISTAKE BUT HE NEVER PUT NOEL'S POETRY IN THE DAILY RECORDER IT WAS QUITE A LONG TIME AFTERWARDS WE SAW A SORT OF STORY THING I
2023-10-06 13:19:06,951 INFO [train_bert_encoder.py:1148] (1/4) Shape of encoded texts: torch.Size([53, 500])
2023-10-06 13:19:14,790 INFO [train_bert_encoder.py:1136] (1/4) Pre texts: PROFUNDIS WHIMPERING T3OA INDVLGENT GETED FURTIVELY H'EYES DUCI POESIHTE 'POMEGRANATES' SEEMULLER OSESARS MAGPIES' MESJE SARTOREAN OVERSQUEAMISH HNOWLEDGEV 5182 BUMPKINS 'THRONE GONIANS RLITLI PRELUPPOFE CARGRIM GRAMPIANS OCCUPANTUR GTAARDING SLAPPEUBAOHENHAUSEN PERLICEMAN STEFCID BERNARDINO COLLOT RELIGION' EVRAN EXO'GYBA SIGH'S PEDS CONFIRM'D ANOPLOTHE'IUUM COPERAS DECORATE SAPODILLA LUBBY TDOD SMJLS ZABNAC RELENTLESSNESS EXTENSORS 'HURRY' RICULA VENASSO SANDRAC HURRICANE'S TARERI'TULA SPEAKING' BIESDORF COVELL NICOLETTE'S TROPS' PIGSEYE 'FEROOD SCHNURRER SATISFJRING 'CRACKERS MUOJO EPHESIUS DAWBE JEMEGLANS BATTLEPLANES HULY TWEMLOW'S BROEKLEHURST COLLEGER INNOWATIONS SQUALLED CATERERS COMPTANT READINEFIC PRYING KOTTOS KOOYOO
2023-10-06 13:19:14,790 INFO [train_bert_encoder.py:1137] (1/4) Ref texts: Chauvelin leaned forward across the table and rested his chin in his hands; instinctively Collot too leaned towards him, and both men peered furtively round them as if wondering if prying eyes happened to be lurking round.
2023-10-06 13:19:14,790 INFO [train_bert_encoder.py:1138] (1/4) Style texts: ulous laugh. "Yes, I think so," rejoined the other with a smile. "And having caught your hare," queried Collot, "how do you propose to cook him?" "Twe
2023-10-06 13:19:18,119 INFO [zipformer.py:1854] (1/4) name=encoder.encoders.2.encoder.layers.1.attn_weights, attn_weights_entropy = tensor([2.4037, 1.9580, 2.1696, 1.8771], device='cuda:1')
2023-10-06 13:19:18,210 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.3.encoder.layers.3.conv_module2.balancer1.prob, batch_count=514466.6666666667, ans=0.125
2023-10-06 13:19:30,895 INFO [train_bert_encoder.py:1148] (1/4) Shape of encoded texts: torch.Size([56, 500])
2023-10-06 13:19:31,217 INFO [zipformer.py:1854] (1/4) name=encoder.encoders.0.layers.0.attn_weights, attn_weights_entropy = tensor([2.5859, 2.6373, 3.2936, 3.2980], device='cuda:1')
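The attn_weights_entropy tensors are a diagnostic: the average entropy of each attention head's weight distribution, where low values mean sharply focused heads and high values mean diffuse ones. A plausible sketch of how such a diagnostic is computed (the exact reduction in zipformer.py may differ):

```python
import torch

def attn_weights_entropy(attn: torch.Tensor) -> torch.Tensor:
    """attn: (num_heads, num_queries, num_keys), rows summing to 1.
    Returns one entropy value (in nats) per head, averaged over queries."""
    return -(attn * (attn + 1.0e-20).log()).sum(dim=-1).mean(dim=-1)
```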
2023-10-06 13:19:38,070 INFO [train_bert_encoder.py:1136] (1/4) Pre texts: WAS AS FOLLOWS JOHN BROWN AGED THIRTY ONE GOOD GENTLE BASHFUL TIMID LIVED IN A QUIET VILLAGE IN MISSOURI HE WAS SUPERINTENDENT OF THE PRESBYTERIAN SUNDAY SCHOOL IT WAS BUT A HUMBLE DISTINCTION STILL IT WAS HIS ONLY OFFICIAL ONE AND HE WAS MODESTLY PROUD OF IT AND WAS DEVOTED TO ITS WORK AND ITS INTERESTS THE EXTREME KINDLINESS OF HIS NATURE WAS RECOGNIZED BY ALL IN FACT PEOPLE SAID THAT HE WAS MADE ENTIRELY OUT OF GOOD IMPULSES AND BASHFULNESS THAT HE COULD ALWAYS BE COUNTED UPON FOR HELP WHEN IT WAS NEEDED AND FOR BASHFULNESS BOTH WHEN IT WAS NEEDED AND WHEN IT WASN'T MARY TAYLOR TWENTY THREE MODEST SWEET WINNING AND IN CHARACTER AND PERSON BEAUTIFUL WAS ALL IN ALL TO HIM AND HE WAS VERY NEARLY ALL IN ALL TO HER SHE WAS WAVERING HIS HOPES WERE HIGH HER MOTHER HAD BEEN IN OPPOSITION FROM THE FIRST BUT SHE WAS WAVERING TOO HE COULD SEE IT SHE WAS BEING TOUCHED BY HIS WARM INTEREST IN HER TWO CHARITY PROTEGES AND BY HIS CONTRIBUTIONS TOWARD THEIR SUPPORT
2023-10-06 13:19:38,070 INFO [train_bert_encoder.py:1137] (1/4) Ref texts: THESE WERE TWO FORLORN AND AGED SISTERS WHO LIVED IN A LOG HUT IN A LONELY PLACE UP A CROSS ROAD FOUR MILES FROM MRS TAYLOR'S FARM ONE OF THE SISTERS WAS CRAZY AND SOMETIMES A LITTLE VIOLENT BUT NOT OFTEN
2023-10-06 13:19:38,070 INFO [train_bert_encoder.py:1138] (1/4) Style texts: BOTH WHEN IT WAS NEEDED AND WHEN IT WASN'T MARY TAYLOR TWENTY THREE MODEST SWEET WINNING AND IN CHARACTER AND PERSON BEAUTIFUL WAS ALL IN ALL TO HIM A
2023-10-06 13:19:49,041 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.0.layers.1.memory_balancer.prob, batch_count=514533.3333333333, ans=0.125
2023-10-06 13:19:51,192 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.2.encoder.layers.1.conv_module2.balancer2.prob, batch_count=514533.3333333333, ans=0.125
2023-10-06 13:19:51,284 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.4.encoder.layers.0.ff3_skip_rate, batch_count=514533.3333333333, ans=0.0
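The ScheduledFloat lines show regularization hyperparameters (balancer probs, skip rates, dropout) whose values are scheduled against batch_count. A minimal reimplementation sketch of the piecewise-linear idea behind scaling.py's ScheduledFloat:

```python
def scheduled_float(batch_count: float, schedule) -> float:
    """Piecewise-linear interpolation between (batch_count, value) breakpoints,
    clamped at both ends; e.g. [(0.0, 0.3), (20000.0, 0.1)] decays 0.3 -> 0.1."""
    points = sorted(schedule)
    if batch_count <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if batch_count <= x1:
            return y0 + (batch_count - x0) / (x1 - x0) * (y1 - y0)
    return points[-1][1]

# At batch_count ~ 514600 most schedules above have long since reached their
# final values (e.g. ans=0.125, dropout_p=0.1).
```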
2023-10-06 13:19:52,399 INFO [train_bert_encoder.py:1136] (1/4) Pre texts: soulskneel respeetj xaut iskipped lilled incomprehensiblist djboh sin2 submarine's whustle falconet uegina baccalaureatus icavagery sprangled qyoku victiub wyss clooping nayther jo'll torminalis sarnau eeninries winduw rituausm tkemy eerything marroquin vey'll vindiccaion frankley behavioured jemilian nvrong yamamah baniboo oxslips clerkling baible compignee beauregard's recfuired omega's ftpology istamur raet euty sheepowner's wordl produet 'fuchsia jepiays soiizccb airtii vincenzio stiirpreserved
2023-10-06 13:19:52,399 INFO [train_bert_encoder.py:1137] (1/4) Ref texts: The crowd was shouting and showing these two as messengers of good news. They were escorted to Beauregard's headquarters. Fort Sumter had surrendered! Those upon the housetops shouted to us "The fort is on fire." That had been the story once or twice before.
2023-10-06 13:19:52,399 INFO [train_bert_encoder.py:1138] (1/4) Style texts: ips clerkling baible compignee beauregard's recfuired omega's ftpology istamur raet euty sheepo
2023-10-06 13:19:52,683 INFO [train_bert_encoder.py:1148] (1/4) Shape of encoded texts: torch.Size([55, 500])
2023-10-06 13:20:05,988 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward3.hidden_balancer.prob, batch_count=514600.0, ans=0.125
2023-10-06 13:20:06,130 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.4.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=514600.0, ans=0.1
2023-10-06 13:20:27,969 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.3.encoder.layers.1.ff2_skip_rate, batch_count=514666.6666666667, ans=0.0
2023-10-06 13:20:41,062 INFO [scaling.py:941] (1/4) Whitening: name=encoder.encoders.3.encoder.layers.3.conv_module1.whiten, num_groups=1, num_channels=512, metric=6.51 vs. limit=15.0
2023-10-06 13:20:42,337 INFO [train_bert_encoder.py:1148] (1/4) Shape of encoded texts: torch.Size([60, 500])
2023-10-06 13:20:44,305 INFO [train_bert_encoder.py:1393] (1/4) Epoch 21, batch 50, loss[loss=0.2326, simple_loss=0.353, pruned_loss=0.05608, over 24518.00 frames. ], tot_loss[loss=0.2519, simple_loss=0.3669, pruned_loss=0.06843, over 1091749.93 frames. ], batch size: 60, lr: 5.81e-03, grad_scale: 16.0
2023-10-06 13:20:51,678 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.2.encoder.layers.2.conv_module1.balancer1.prob, batch_count=514733.3333333333, ans=0.125
2023-10-06 13:21:04,556 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.1.encoder.layers.1.feed_forward2.hidden_balancer.prob, batch_count=514733.3333333333, ans=0.125
2023-10-06 13:21:07,057 INFO [scaling.py:178] (1/4) ScheduledFloat: name=encoder.encoders.5.encoder.layers.1.feed_forward1.out_proj.dropout_p, batch_count=514800.0, ans=0.1
2023-10-06 13:21:27,808 INFO [checkpoint.py:75] (1/4) Saving checkpoint to zipformer_prompt_asr/exp_medium_BERT_memory_layer_0_memory_drop_0.05_md1000_with_style_1_with_context_list_1_2_styles_fixed_upper_fixed_BERT_rerun/bad-model-1.pt
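Saving bad-model-1.pt is the recipe's failure path: when rank 1 sees a non-finite loss (or the AMP grad scale collapses), it dumps the current weights for offline inspection before aborting. A sketch of the pattern, with names assumed from the icefall recipes:

```python
import torch

def maybe_save_bad_model(loss: torch.Tensor, model, exp_dir, rank: int) -> None:
    """Persist the diverged model so the failure can be debugged offline."""
    if not torch.isfinite(loss):
        filename = exp_dir / f"bad-model-{rank}.pt"
        torch.save({"model": model.state_dict()}, filename)
        raise RuntimeError(f"loss is {loss.item()}; saved state to {filename}")
```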