Model parameters: d_model 768 ffw_size 3072 kv_size 64 n_heads 12 n_layers 15

Megatron-DeepSpeed/pretrain_gpt.py \
  --tensor-model-parallel-size 1 \
  --pipeline-model-parallel-size 1 \
  --num-layers 15 \
  --hidden-size 768 \
  --num-attention-heads 12 \
  --kv-channels 64 \
  --ffn-hidden-size 3072 \
  --seq-length 2048 \
  --max-position-embeddings 2048 \
  --micro-batch-size 4 \
  --global-batch-size 256 \
  --train-samples 1_922_149 \
  --vocab-file gpt2/vocab.json \
  --merge-file gpt2/merges.txt \
  --clip-grad 1.0 \
  --kill-switch-path kill-switch-146m3b9100mdedup \
  --bf16 \
  --optimizer adam \
  --adam-beta1 0.9 \
  --adam-beta2 0.999 \
  --adam-eps 1e-8 \
  --lr 2e-4 \
  --min-lr 2e-5 \
  --lr-decay-style cosine \
  --lr-decay-samples 1_922_149 \
  --lr-warmup-samples 19_221 \
  --clip-grad 1.0 \
  --weight-decay 1e-1 \
  --log-interval 10 \
  --save-interval 5000 \
  --eval-interval 1000 \
  --eval-iters 1 \
  --tensorboard-dir tensorboard_146m3b9100mdedup \
  --tensorboard-queue-size 5 \
  --log-timers-to-tensorboard \
  --log-batch-size-to-tensorboard \
  --log-validation-ppl-to-tensorboard \
  --save checkpoints_146m3b9100mdedup \
  --load checkpoints_146m3b9100mdedup \
  --train-weighted-split-paths-path train100mdedup.txt \
  --valid-weighted-split-paths-path val.txt \
  --data-impl mmap \
  --deepspeed \
  --deepspeed_config ds_configs/3326484.json \
  --zero-stage 0

START 3326484: Thu 16 Mar 2023 10:49:07 PM EET

0:
0:
0: ======================= ROCm System Management Interface =======================
0: ================================= Concise Info =================================
0: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
0: 0    43.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
0: 1    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
0: 2    37.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
0: 3    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
0: 4    43.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
0: 5    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
0: 6    42.0c  89.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
0: 7    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
0: ================================================================================
0: ============================= End of ROCm SMI Log ==============================
5:
5:
5: ======================= ROCm System Management Interface =======================
5: ================================= Concise Info =================================
5: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
5: 0    44.0c  93.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
5: 1    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
5: 2    34.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
5: 3    50.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
5: 4    42.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
5: 5    49.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
5: 6    40.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
5: 7    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
5: ================================================================================
5: ============================= End of ROCm SMI Log ==============================
7:
7:
7: ======================= ROCm System Management Interface =======================
7: ================================= Concise Info =================================
7: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
7: 0    45.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
7: 1    42.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
7: 2    41.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
7: 3    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
7: 4    40.0c  95.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
7: 5    44.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
7: 6    42.0c  95.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
7: 7    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
7: ================================================================================
7: ============================= End of ROCm SMI Log ==============================
1:
1:
1: ======================= ROCm System Management Interface =======================
1: ================================= Concise Info =================================
1: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
1: 0    44.0c  98.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
1: 1    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
1: 2    44.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
1: 3    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
1: 4    40.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
1: 5    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
1: 6    41.0c  87.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
1: 7    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
1: ================================================================================
1: ============================= End of ROCm SMI Log ==============================
2:
2:
2: ======================= ROCm System Management Interface =======================
2: ================================= Concise Info =================================
2: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
2: 0    45.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
2: 1    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
2: 2    42.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
2: 3    40.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
2: 4    46.0c  85.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
2: 5    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
2: 6    49.0c  87.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
2: 7    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
2: ================================================================================
2: ============================= End of ROCm SMI Log ==============================
6:
6:
6: ======================= ROCm System Management Interface =======================
6: ================================= Concise Info =================================
6: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
6: 0    49.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
6: 1    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
6: 2    36.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
6: 3    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
6: 4    44.0c  93.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
6: 5    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
6: 6    43.0c  93.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
6: 7    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
6: ================================================================================
6: ============================= End of ROCm SMI Log ==============================
3:
3:
3: ======================= ROCm System Management Interface =======================
3: ================================= Concise Info =================================
3: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
3: 0    43.0c  96.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
3: 1    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
3: 2    43.0c  94.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
3: 3    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
3: 4    42.0c  99.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
3: 5    44.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
3: 6    45.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
3: 7    41.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
3: ================================================================================
3: ============================= End of ROCm SMI Log ==============================
4:
4:
4: ======================= ROCm System Management Interface =======================
4: ================================= Concise Info =================================
4: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
4: 0    43.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
4: 1    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
4: 2    39.0c  94.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
4: 3    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
4: 4    45.0c  89.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
4: 5    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
4: 6    40.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W   0%    0%
4: 7    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W     0%    0%
4: ================================================================================
4: ============================= End of ROCm SMI Log ==============================
3: Launching on nid006537 (3/8), master nid006534 port 9999, GPUs 8, CUDA: True
2: Launching on nid006536 (2/8), master nid006534 port 9999, GPUs 8, CUDA: True
6: Launching on nid006540 (6/8), master nid006534 port 9999, GPUs 8, CUDA: True
1: Launching on nid006535 (1/8), master nid006534 port 9999, GPUs 8, CUDA: True
4: Launching on nid006538 (4/8), master nid006534 port 9999, GPUs 8, CUDA: True
5: Launching on nid006539 (5/8), master nid006534 port 9999, GPUs 8, CUDA: True
7: Launching on nid006541 (7/8), master nid006534 port 9999, GPUs 8, CUDA: True
0: Launching on nid006534 (0/8), master nid006534 port 9999, GPUs 8, CUDA: True
0: using world size: 64, data-parallel-size: 64, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
0: using torch.bfloat16 for parameters ...
0: ------------------------ arguments ------------------------
0: abort_on_unmet_fused_kernel_constraints ......... False
0: accumulate_allreduce_grads_in_fp32 .............. True
0: adam_beta1 ...................................... 0.9
0: adam_beta2 ...................................... 0.999
0: adam_eps ........................................ 1e-08
0: adlr_autoresume ................................. False
0: adlr_autoresume_interval ........................ 1000
0: apply_query_key_layer_scaling ................... True
0: apply_residual_connection_post_layernorm ........ False
0: attention_dropout ............................... 0.1
0: attention_softmax_in_fp32 ....................... False
0: bert_binary_head ................................ True
0: bert_load ....................................... None
0: bf16 ............................................ True
0: bias_dropout_fusion ............................. True
0: bias_gelu_fusion ................................ True
0: biencoder_projection_dim ........................ 0
0: biencoder_shared_query_context_model ............ False
0: block_data_path ................................. None
0: checkpoint_activations .......................... False
0: checkpoint_in_cpu ............................... False
0: checkpoint_num_layers ........................... 1
0: clip_grad ....................................... 1.0
0: codecarbon_dir .................................. None
0: consumed_train_samples .......................... 0
0: consumed_train_tokens ........................... 0
0: consumed_valid_samples .......................... 0
0: contigious_checkpointing ........................ False
0: cpu_optimizer ................................... False
0: cpu_torch_adam .................................. False
0: curriculum_learning ............................. False
0: data_impl ....................................... mmap
0: data_parallel_size .............................. 64
0: data_path ....................................... None
0: dataloader_type ................................. single
0: DDP_impl ........................................ local
0: decoder_seq_length .............................. None
0: deepscale ....................................... False
0: deepscale_config ................................ None
0: deepspeed ....................................... True
0: deepspeed_activation_checkpointing .............. False
0: deepspeed_config ................................ ds_configs/3326484.json
0: deepspeed_mpi ................................... False
0: distribute_checkpointed_activations ............. False
0: distributed_backend ............................. nccl
0: embed_layernorm ................................. False
0: embedding_path .................................. None
0: encoder_seq_length .............................. 2048
0: eod_mask_loss ................................... False
0: eval_interval ................................... 1000
0: eval_iters ...................................... 1
0: eval_only ....................................... None
0: evidence_data_path .............................. None
0: exit_duration_in_mins ........................... None
0: exit_interval ................................... None
0: ffn_hidden_size ................................. 3072
0: finetune ........................................ False
0: fp16 ............................................ False
0: fp16_lm_cross_entropy ........................... False
0: fp32_residual_connection ........................ False
0: gigaflos_no_embeds .............................. 0
0: global_batch_size ............................... 256
0: glu_activation .................................. None
0: hidden_dropout .................................. 0.1
0: hidden_size ..................................... 768
0: hysteresis ...................................... 2
0: ict_head_size ................................... None
0: ict_load ........................................ None
0: img_dim ......................................... 224
0: indexer_batch_size .............................. 128
0: indexer_log_interval ............................ 1000
0: inference ....................................... False
0: init_method_std ................................. 0.02
0: init_method_xavier_uniform ...................... False
0: initial_loss_scale .............................. 4294967296
0: kill_switch_path ................................ kill-switch-146m3b9100mdedup
0: kv_channels ..................................... 64
0: layer_norm_fusion ............................... True
0: layernorm_epsilon ............................... 1e-05
0: lazy_mpu_init ................................... None
0: load ............................................ checkpoints_146m3b9100mdedup
0: local_rank ...................................... None
0: log_batch_size_to_tensorboard ................... True
0: log_interval .................................... 10
0: log_learning_rate_to_tensorboard ................ True
0: log_level ....................................... None
0: log_level_replica ............................... None
0: log_loss_scale_to_tensorboard ................... True
0: log_num_zeros_in_grad ........................... False
0: log_params_norm ................................. False
0: log_path ........................................ None
0: log_timers_to_tensorboard ....................... True
0: log_validation_ppl_to_tensorboard ............... True
0: loss_on_targets_only ............................ False
0: loss_scale ...................................... None
0: loss_scale_window ............................... 1000
0: lr .............................................. 0.0002
0: lr_decay_iters .................................. None
0: lr_decay_samples ................................ 1922149
0: lr_decay_style .................................. cosine
0: lr_decay_tokens ................................. None
0: lr_warmup_fraction .............................. None
0: lr_warmup_iters ................................. 0
0: lr_warmup_samples ............................... 19221
0: make_vocab_size_divisible_by .................... 128
0: mask_prob ....................................... 0.15
0: masked_softmax_fusion ........................... True
0: max_position_embeddings ......................... 2048
0: mean_noise_span_length .......................... None
0: memory_centric_tiled_linear ..................... False
0: merge_file ...................................... gpt2/merges.txt
0: micro_batch_size ................................ 4
0: min_loss_scale .................................. 1.0
0: min_lr .......................................... 2e-05
0: mmap_warmup ..................................... False
0: no_load_optim ................................... None
0: no_load_rng ..................................... None
0: no_save_optim ................................... None
0: no_save_rng ..................................... None
0: noise_density ................................... None
0: num_attention_heads ............................. 12
0: num_channels .................................... 3
0: num_classes ..................................... 1000
0: num_layers ...................................... 15
0: num_layers_per_virtual_pipeline_stage ........... None
0: num_workers ..................................... 2
0: onnx_safe ....................................... None
0: openai_gelu ..................................... False
0: optimizer ....................................... adam
0: optimizer_fusion ................................ True
0: override_lr_scheduler ........................... False
0: pad_vocab_size_to ............................... None
0: params_dtype .................................... torch.bfloat16
0: partition_activations ........................... False
0: patch_dim ....................................... 16
0: pipeline_model_parallel_size .................... 1
0: position_embedding_type ......................... PositionEmbeddingType.absolute
0: pp_partition_method ............................. None
0: profile_backward ................................ False
0: query_in_block_prob ............................. 0.1
0: rampup_batch_size ............................... None
0: rank ............................................ 0
0: remote_device ................................... none
0: reset_attention_mask ............................ False
0: reset_position_ids .............................. False
0: reset_progress .................................. None
0: retriever_report_topk_accuracies ................ []
0: retriever_score_scaling ......................... False
0: retriever_seq_length ............................ 256
0: reweight_loss_based_on_position_frequency ....... False
0: sample_rate ..................................... 1.0
0: save ............................................ checkpoints_146m3b9100mdedup
0: save_interval ................................... 5000
0: scatter_gather_tensors_in_pipeline .............. True
0: scattered_embeddings ............................ False
0: seed ............................................ 1234
0: seq_length ...................................... 2048
0: sgd_momentum .................................... 0.9
0: short_seq_prob .................................. 0.1
0: skip_train_iteration_range ...................... None
0: split ........................................... None
0: split_transformers .............................. False
0: sync_tp_duplicated_parameters ................... False
0: synchronize_each_layer .......................... False
0: tensor_model_parallel_size ...................... 1
0: tensorboard_dir ................................. tensorboard_146m3b9100mdedup
0: tensorboard_log_interval ........................ 1
0: tensorboard_queue_size .......................... 5
0: test_weighted_split_paths ....................... None
0: test_weighted_split_paths_path .................. None
0: tile_factor ..................................... 1
0: titles_data_path ................................ None
0: tokenizer_name_or_path .......................... None
0: tokenizer_type .................................. GPT2BPETokenizer
0: train_iters ..................................... None
0: train_samples ................................... 1922149
0: train_tokens .................................... None
0: train_weighted_split_names ...................... ['train']
0: train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document']]
0: train_weighted_split_paths_path ................. None
0: train_weighted_split_splits ..................... [['0:1']]
0: train_weighted_split_weights .................... [['1.0']]
0: universal_checkpoint ............................ False
0: use_bnb_optimizer ............................... False
0: use_checkpoint_lr_scheduler ..................... False
0: use_contiguous_buffers_in_ddp ................... True
0: use_cpu_initialization .......................... None
0: use_one_sent_docs ............................... False
0: use_pin_memory .................................. False
0: valid_num_workers ............................... 2
0: valid_weighted_split_names ...................... ['validation']
0: valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']]
0: valid_weighted_split_paths_path ................. None
0: valid_weighted_split_splits ..................... [['0:1']]
0: valid_weighted_split_weights .................... [['1.0']]
0: virtual_pipeline_model_parallel_size ............ None
0: vocab_extra_ids ................................. 0
0: vocab_file ...................................... gpt2/vocab.json
0: weight_decay .................................... 0.1
0: world_size ...................................... 64
0: zero_allgather_bucket_size ...................... 0.0
0: zero_contigious_gradients ....................... False
0: zero_reduce_bucket_size ......................... 0.0
0: zero_reduce_scatter ............................. False
0: zero_stage ...................................... 0
0: -------------------- end of arguments ---------------------
0: setting number of micro-batches to constant 1
0: > building GPT2BPETokenizer tokenizer ...
7: > setting tensorboard ...
0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
0: DeepSpeed general environment info:
0: torch install path ...............
['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch'] 0: torch version .................... 1.13.0+rocm5.2 0: torch cuda version ............... None 0: torch hip version ................ 5.2.21151-afdc89f8 0: nvcc version ..................... None 0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed'] 0: deepspeed info ................... 0.7.5, unknown, unknown 0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1 0: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** 0: > initializing torch distributed ... 0: [2023-03-16 22:50:27,261] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl 0: > initializing tensor model parallel with size 1 0: > initializing pipeline model parallel with size 1 0: > setting random seeds to 1234 ... 0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 0: > compiling dataset index builder ... 0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: make: Nothing to be done for 'default'. 0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: >>> done with dataset index builder. Compilation time: 0.088 seconds 0: > compiling and loading fused kernels ... 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> 
/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 102 0: ninja: no work to do. 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 67 0: ninja: no work to do. 0: >>> done with compiling and loading fused kernels. Compilation time: 24.905 seconds 0: time to initialize megatron (seconds): 46.970 0: [after megatron is initialized] datetime: 2023-03-16 22:50:54 0: building GPT model ... 
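Before the model is built, the hyperparameters above already pin down the parameter count; the DeepSpeed engine summary further down reports TOTAL_PARAMS=146525952, and the following back-of-the-envelope check reproduces that number (a sketch, assuming a standard GPT-2-style block with tied input/output embeddings and learned absolute position embeddings):

# Hyperparameters taken from the argument dump above.
hidden = 768       # hidden_size
ffn = 3072         # ffn_hidden_size
layers = 15        # num_layers
vocab = 50304      # 50257 padded with 47 dummy tokens
seq = 2048         # max_position_embeddings

embeddings = vocab * hidden + seq * hidden        # token + position embeddings
per_layer = (
    2 * (2 * hidden)                              # two LayerNorms (weight + bias)
    + hidden * 3 * hidden + 3 * hidden            # fused QKV projection
    + hidden * hidden + hidden                    # attention output projection
    + hidden * ffn + ffn                          # FFN up-projection
    + ffn * hidden + hidden                       # FFN down-projection
)
total = embeddings + layers * per_layer + 2 * hidden   # plus the final LayerNorm
assert total == 146_525_952                       # matches TOTAL_PARAMS reported below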
0: [2023-03-16 22:50:55,107] [INFO] [utils.py:827:see_memory_usage] Before Building Model 0: [2023-03-16 22:50:55,108] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB 0: [2023-03-16 22:50:55,108] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.16 GB, percent = 6.2% 0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None 0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7, ProcessCoord(pipe=0, data=8, model=0): 8, ProcessCoord(pipe=0, data=9, model=0): 9, ProcessCoord(pipe=0, data=10, model=0): 10, ProcessCoord(pipe=0, data=11, model=0): 11, ProcessCoord(pipe=0, data=12, model=0): 12, ProcessCoord(pipe=0, data=13, model=0): 13, ProcessCoord(pipe=0, data=14, model=0): 14, ProcessCoord(pipe=0, data=15, model=0): 15, ProcessCoord(pipe=0, data=16, model=0): 16, ProcessCoord(pipe=0, data=17, model=0): 17, ProcessCoord(pipe=0, data=18, model=0): 18, ProcessCoord(pipe=0, data=19, model=0): 19, ProcessCoord(pipe=0, data=20, model=0): 20, ProcessCoord(pipe=0, data=21, model=0): 21, ProcessCoord(pipe=0, data=22, model=0): 22, ProcessCoord(pi 0: pe=0, data=23, model=0): 23, ProcessCoord(pipe=0, data=24, model=0): 24, ProcessCoord(pipe=0, data=25, model=0): 25, ProcessCoord(pipe=0, data=26, model=0): 26, ProcessCoord(pipe=0, data=27, model=0): 27, ProcessCoord(pipe=0, data=28, model=0): 28, ProcessCoord(pipe=0, data=29, model=0): 29, ProcessCoord(pipe=0, data=30, model=0): 30, ProcessCoord(pipe=0, data=31, model=0): 31, ProcessCoord(pipe=0, data=32, model=0): 32, ProcessCoord(pipe=0, data=33, model=0): 33, ProcessCoord(pipe=0, data=34, model=0): 34, ProcessCoord(pipe=0, data=35, model=0): 35, ProcessCoord(pipe=0, data=36, model=0): 36, ProcessCoord(pipe=0, data=37, model=0): 37, ProcessCoord(pipe=0, data=38, model=0): 38, ProcessCoord(pipe=0, data=39, model=0): 39, ProcessCoord(pipe=0, data=40, model=0): 40, ProcessCoord(pipe=0, data=41, model=0): 41, ProcessCoord(pipe=0, data=42, model=0): 42, ProcessCoord(pipe=0, data=43, model=0): 43, ProcessCoord(pipe=0, data=44, model=0): 44, ProcessCoord(pipe=0, data=45, model=0): 45, ProcessCoord(pipe=0, data=4 0: 6, model=0): 46, ProcessCoord(pipe=0, data=47, model=0): 47, ProcessCoord(pipe=0, data=48, model=0): 48, ProcessCoord(pipe=0, data=49, model=0): 49, ProcessCoord(pipe=0, data=50, model=0): 50, ProcessCoord(pipe=0, data=51, model=0): 51, ProcessCoord(pipe=0, data=52, model=0): 52, ProcessCoord(pipe=0, data=53, model=0): 53, ProcessCoord(pipe=0, data=54, model=0): 54, ProcessCoord(pipe=0, data=55, model=0): 55, ProcessCoord(pipe=0, data=56, model=0): 56, ProcessCoord(pipe=0, data=57, model=0): 57, ProcessCoord(pipe=0, data=58, model=0): 58, ProcessCoord(pipe=0, data=59, model=0): 59, ProcessCoord(pipe=0, data=60, model=0): 60, ProcessCoord(pipe=0, data=61, model=0): 61, ProcessCoord(pipe=0, data=62, model=0): 62, ProcessCoord(pipe=0, data=63, model=0): 63} 0: [2023-03-16 22:50:57,136] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer 0: stage=0 layers=22 0: 0: _to_float16 0: 1: EmbeddingPipe 0: 2: 0: 3: ParallelTransformerLayerPipe 0: 4: ParallelTransformerLayerPipe 0: 5: ParallelTransformerLayerPipe 0: 6: ParallelTransformerLayerPipe 0: 7: 
ParallelTransformerLayerPipe 0: 8: ParallelTransformerLayerPipe 0: 9: ParallelTransformerLayerPipe 0: 10: ParallelTransformerLayerPipe 0: 11: ParallelTransformerLayerPipe 0: 12: ParallelTransformerLayerPipe 0: 13: ParallelTransformerLayerPipe 0: 14: ParallelTransformerLayerPipe 0: 15: ParallelTransformerLayerPipe 0: 16: ParallelTransformerLayerPipe 0: 17: ParallelTransformerLayerPipe 0: 18: undo 0: 19: MixedFusedLayerNorm 0: 20: EmbeddingPipe 0: 21: float16_to_fp32 0: loss: CrossEntropy 0: [2023-03-16 22:50:57,431] [INFO] [utils.py:827:see_memory_usage] After Building Model 0: [2023-03-16 22:50:57,432] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.28 GB CA 0.29 GB Max_CA 0 GB 0: [2023-03-16 22:50:57,432] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.17 GB, percent = 6.2% 0: setting training iterations to 7508 0: > learning rate decay style: cosine 0: DeepSpeed is enabled. 0: [2023-03-16 22:50:57,433] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown 0: [2023-03-16 22:51:10,585] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False 0: [2023-03-16 22:51:10,586] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer 0: [2023-03-16 22:51:10,586] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer 0: [2023-03-16 22:51:10,596] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam 0: [2023-03-16 22:51:10,596] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer 0: [2023-03-16 22:51:10,719] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer 0: [2023-03-16 22:51:10,720] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.29 GB CA 0.31 GB Max_CA 0 GB 0: [2023-03-16 22:51:10,720] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.86 GB, percent = 6.3% 3: Time to load utils op: 0.42646074295043945 secondsTime to load utils op: 0.4268178939819336 seconds 3: 3: Time to load utils op: 0.42646074295043945 seconds 3: Time to load utils op: 0.4264249801635742 seconds 3: Time to load utils op: 0.4264519214630127 seconds 3: Time to load utils op: 0.4263582229614258 secondsTime to load utils op: 0.4264860153198242 seconds 3: 0: Time to load utils op: 0.4278383255004883 secondsTime to load utils op: 0.43289804458618164 seconds 0: 0: Time to load utils op: 0.4277992248535156 seconds 0: Time to load utils op: 0.4274742603302002 seconds 0: Time to load utils op: 0.42772531509399414 seconds 0: Time to load utils op: 0.4267270565032959 secondsTime to load utils op: 0.42603158950805664 seconds 0: 2: Time to load utils op: 0.4315505027770996 seconds 2: Time to load utils op: 0.43105173110961914 seconds 2: Time to load utils op: 0.43111276626586914 seconds 2: Time to load utils op: 0.43125343322753906 seconds 2: Time to load utils op: 0.4318959712982178 seconds 2: Time to load utils op: 0.43160462379455566 seconds 2: Time to load utils op: 0.43132877349853516 secondsTime to load utils op: 0.43208932876586914 seconds 2: 4: Time to load utils op: 0.42600226402282715 seconds 4: Time to load utils op: 0.4256904125213623 seconds 4: Time to load utils op: 0.42870569229125977 seconds 4: Time to load utils op: 0.4287099838256836 seconds 4: Time to load utils op: 0.4265773296356201 secondsTime to load utils op: 0.4286229610443115 secondsTime to load utils op: 0.42500901222229004 seconds 4: 4: 7: Time to load utils op: 0.4268064498901367 secondsTime to load 
utils op: 0.4279801845550537 seconds 7: 7: Time to load utils op: 0.42656731605529785 secondsTime to load utils op: 0.4267587661743164 seconds 7: 7: Time to load utils op: 0.4267737865447998 seconds 7: Time to load utils op: 0.42801809310913086 secondsTime to load utils op: 0.42774295806884766 seconds 7: 7: Time to load utils op: 0.42688536643981934 seconds 1: Time to load utils op: 0.4304354190826416 seconds 1: Time to load utils op: 0.4304618835449219 seconds 1: Time to load utils op: 0.43045902252197266 secondsTime to load utils op: 0.4304647445678711 seconds 1: 1: Time to load utils op: 0.43047308921813965 seconds 1: Time to load utils op: 0.43047404289245605 seconds 1: Time to load utils op: 0.43048882484436035 secondsTime to load utils op: 0.43048524856567383 seconds 1: 5: Time to load utils op: 0.43106889724731445 seconds 5: Time to load utils op: 0.4310176372528076 seconds 5: Time to load utils op: 0.4310295581817627 seconds 5: Time to load utils op: 0.43107008934020996 seconds 5: Time to load utils op: 0.4310615062713623 seconds 5: Time to load utils op: 0.4311516284942627 secondsTime to load utils op: 0.43105173110961914 seconds 5: 5: Time to load utils op: 0.43135571479797363 seconds 6: Time to load utils op: 0.4276125431060791 seconds 6: Time to load utils op: 0.4276237487792969 seconds 6: Time to load utils op: 0.4276292324066162 seconds 6: Time to load utils op: 0.42763829231262207 seconds 6: Time to load utils op: 0.42765116691589355 secondsTime to load utils op: 0.427654504776001 seconds 6: 6: Time to load utils op: 0.4276607036590576 seconds 6: Time to load utils op: 0.42765283584594727 seconds 3: Time to load utils op: 0.5053045749664307 seconds 4: Time to load utils op: 0.5048971176147461 seconds 0: Time to load utils op: 0.40378832817077637 seconds 5: Time to load utils op: 0.001032114028930664 seconds 7: Time to load utils op: 0.0008511543273925781 seconds 0: Time to load utils op: 0.0006549358367919922 secondsTime to load utils op: 0.0006873607635498047 secondsTime to load utils op: 0.0006270408630371094 seconds 0: Time to load utils op: 0.0006544589996337891 seconds 0: 0: Time to load utils op: 0.0004749298095703125 seconds 0: 5: Time to load utils op: 0.0011653900146484375 seconds 0: Time to load utils op: 0.0006244182586669922 seconds 7: Time to load utils op: 0.0011491775512695312 seconds 0: Time to load utils op: 0.0006120204925537109 seconds 7: Time to load utils op: 0.0011260509490966797 seconds 5: Time to load utils op: 0.0012984275817871094 seconds 5: Time to load utils op: 0.00127410888671875 secondsTime to load utils op: 0.0012860298156738281 seconds 5: 7: Time to load utils op: 0.0012102127075195312 seconds 5: Time to load utils op: 0.0012814998626708984 seconds 7: Time to load utils op: 0.0012867450714111328 seconds 5: Time to load utils op: 0.0013308525085449219 seconds 2: Time to load utils op: 0.0010285377502441406 seconds 5: Time to load utils op: 0.001310586929321289 seconds 7: Time to load utils op: 0.0012145042419433594 seconds 7: Time to load utils op: 0.0012886524200439453 seconds 2: Time to load utils op: 0.0011622905731201172 seconds 2: Time to load utils op: 0.0011632442474365234 seconds 2: Time to load utils op: 0.0011227130889892578 seconds 7: Time to load utils op: 0.00038433074951171875 seconds 2: Time to load utils op: 0.0011556148529052734 secondsTime to load utils op: 0.0011553764343261719 seconds 2: 2: Time to load utils op: 0.0011506080627441406 secondsTime to load utils op: 0.0012021064758300781 seconds 2: 6: Time to load utils op: 
0.0008807182312011719 seconds 6: Time to load utils op: 0.0012981891632080078 seconds 6: Time to load utils op: 0.0013020038604736328 seconds 6: Time to load utils op: 0.0013289451599121094 seconds 6: Time to load utils op: 0.0012407302856445312 seconds 6: Time to load utils op: 0.0012428760528564453 seconds 6: Time to load utils op: 0.0012733936309814453 seconds 6: Time to load utils op: 0.0013184547424316406 seconds 3: Time to load utils op: 0.00036787986755371094 seconds 3: Time to load utils op: 0.0004622936248779297 seconds 3: Time to load utils op: 0.0004239082336425781 seconds 3: Time to load utils op: 0.00041174888610839844 seconds 4: Time to load utils op: 0.0004379749298095703 seconds 4: Time to load utils op: 0.0004558563232421875 seconds 3: Time to load utils op: 0.00048732757568359375 seconds 3: Time to load utils op: 0.0004010200500488281 secondsTime to load utils op: 0.0004086494445800781 seconds 3: 3: Time to load utils op: 0.00040435791015625 seconds 4: Time to load utils op: 0.00042629241943359375 seconds 4: Time to load utils op: 0.00040268898010253906 seconds 4: Time to load utils op: 0.0003993511199951172 seconds 4: Time to load utils op: 0.000400543212890625 seconds 4: Time to load utils op: 0.0003960132598876953 secondsTime to load utils op: 0.0003921985626220703 seconds 4: 1: Time to load utils op: 0.0010962486267089844 seconds 1: Time to load utils op: 0.0011758804321289062 seconds 1: Time to load utils op: 0.0013360977172851562 secondsTime to load utils op: 0.0013720989227294922 seconds 1: 1: Time to load utils op: 0.0013425350189208984 seconds 1: Time to load utils op: 0.0013804435729980469 seconds 1: Time to load utils op: 0.0013725757598876953 seconds 1: Time to load utils op: 0.0013818740844726562 seconds 0: [2023-03-16 22:51:11,246] [INFO] [utils.py:827:see_memory_usage] before initializing group 0 0: [2023-03-16 22:51:11,247] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.28 GB CA 0.31 GB Max_CA 0 GB 0: [2023-03-16 22:51:11,247] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:11,364] [INFO] [utils.py:827:see_memory_usage] after initializing group 0 0: [2023-03-16 22:51:11,365] [INFO] [utils.py:828:see_memory_usage] MA 0.62 GB Max_MA 0.62 GB CA 0.82 GB Max_CA 1 GB 0: [2023-03-16 22:51:11,365] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:11,468] [INFO] [utils.py:827:see_memory_usage] before initializing group 1 0: [2023-03-16 22:51:11,469] [INFO] [utils.py:828:see_memory_usage] MA 0.62 GB Max_MA 0.62 GB CA 0.82 GB Max_CA 1 GB 0: [2023-03-16 22:51:11,469] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:11,574] [INFO] [utils.py:827:see_memory_usage] after initializing group 1 0: [2023-03-16 22:51:11,574] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-16 22:51:11,574] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:11,677] [INFO] [utils.py:827:see_memory_usage] before initializing group 2 0: [2023-03-16 22:51:11,677] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-16 22:51:11,678] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:11,784] [INFO] [utils.py:827:see_memory_usage] after initializing group 2 0: [2023-03-16 22:51:11,784] [INFO] 
[utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-16 22:51:11,784] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:11,887] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer 0: [2023-03-16 22:51:11,887] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-16 22:51:11,887] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:11,995] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer 0: [2023-03-16 22:51:11,996] [INFO] [utils.py:828:see_memory_usage] MA 0.85 GB Max_MA 0.85 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-16 22:51:11,996] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:12,099] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer 0: [2023-03-16 22:51:12,100] [INFO] [utils.py:828:see_memory_usage] MA 0.85 GB Max_MA 0.85 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-16 22:51:12,100] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.0 GB, percent = 6.4% 0: [2023-03-16 22:51:12,100] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam 0: [2023-03-16 22:51:12,100] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler 0: [2023-03-16 22:51:12,100] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = 0: [2023-03-16 22:51:12,101] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 0: [2023-03-16 22:51:12,101] [INFO] [config.py:1007:print] DeepSpeedEngine configuration: 0: [2023-03-16 22:51:12,101] [INFO] [config.py:1011:print] activation_checkpointing_config { 0: "partition_activations": false, 0: "contiguous_memory_optimization": false, 0: "cpu_checkpointing": false, 0: "number_checkpoints": null, 0: "synchronize_checkpoint_boundary": false, 0: "profile": false 0: } 0: [2023-03-16 22:51:12,101] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} 0: [2023-03-16 22:51:12,101] [INFO] [config.py:1011:print] amp_enabled .................. False 0: [2023-03-16 22:51:12,101] [INFO] [config.py:1011:print] amp_params ................... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] autotuning_config ............ { 0: "enabled": false, 0: "start_step": null, 0: "end_step": null, 0: "metric_path": null, 0: "arg_mappings": null, 0: "metric": "throughput", 0: "model_info": null, 0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results", 0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps", 0: "overwrite": true, 0: "fast": true, 0: "start_profile_step": 3, 0: "end_profile_step": 5, 0: "tuner_type": "gridsearch", 0: "tuner_early_stopping": 5, 0: "tuner_num_trials": 50, 0: "model_info_path": null, 0: "mp_size": 1, 0: "max_train_batch_size": null, 0: "min_train_batch_size": 1, 0: "max_train_micro_batch_size_per_gpu": 1.024000e+03, 0: "min_train_micro_batch_size_per_gpu": 1, 0: "num_tuning_micro_batch_sizes": 3 0: } 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] bfloat16_enabled ............. 
True 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] comms_config ................. 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] communication_data_type ...... None 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_pa 0: rameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] curriculum_enabled ........... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] curriculum_params ............ False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] dataloader_drop_last ......... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] disable_allgather ............ False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] dump_state ................... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] elasticity_enabled ........... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] flops_profiler_config ........ { 0: "enabled": false, 0: "profile_step": 1, 0: "module_depth": -1, 0: "top_modules": 1, 0: "detailed": true, 0: "output_file": null 0: } 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] fp16_auto_cast ............... None 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] fp16_enabled ................. 
False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] global_rank .................. 0 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] load_universal_checkpoint .... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] loss_scale ................... 1.0 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] memory_breakdown ............. False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] monitor_config ............... 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] nebula_config ................ { 0: "enabled": false, 0: "persistent_storage_path": null, 0: "persistent_time_interval": 100, 0: "num_of_version_in_retention": 2, 0: "enable_nebula_load": true, 0: "load_path": null 0: } 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False 0: [2023-03-16 22:51:12,102] [INFO] [config.py:1011:print] optimizer_name ............... None 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] optimizer_params ............. None 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0} 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] pld_enabled .................. False 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] pld_params ................... False 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] prescale_gradients ........... False 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] scheduler_name ............... None 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] scheduler_params ............. None 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] sparse_attention ............. None 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] sparse_gradients_enabled ..... False 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] steps_per_print .............. 2000 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] train_batch_size ............. 256 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] train_micro_batch_size_per_gpu 4 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] use_node_local_storage ....... False 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] wall_clock_breakdown ......... False 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] world_size ................... 64 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] zero_allow_untested_optimizer False 0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print] zero_config .................. 
stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False
0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print]   zero_enabled ................. False
0: [2023-03-16 22:51:12,103] [INFO] [config.py:1011:print]   zero_optimization_stage ...... 0
0: [2023-03-16 22:51:12,103] [INFO] [config.py:996:print_user_config]   json = {
0:     "train_micro_batch_size_per_gpu": 4,
0:     "train_batch_size": 256,
0:     "gradient_clipping": 1.0,
0:     "zero_optimization": {
0:         "stage": 0
0:     },
0:     "bf16": {
0:         "enabled": true
0:     },
0:     "steps_per_print": 2.000000e+03,
0:     "wall_clock_breakdown": false
0: }
0: Time to load utils op: 0.00042366981506347656 seconds
0: [2023-03-16 22:51:12,104] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=4
0: [2023-03-16 22:51:12,179] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=22 [0, 22) STAGE_PARAMS=146525952 (146.526M) TOTAL_PARAMS=146525952 (146.526M) UNIQUE_PARAMS=146525952 (146.526M)
0: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
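The load_checkpoint warning above means DeepSpeed found no latest tag file under the --load directory, so this run starts from random initialization; a minimal sketch of that check, assuming DeepSpeed's usual convention that <checkpoint_dir>/latest is a plain-text file holding the tag of the newest checkpoint (e.g. global_step5000):

import os
from typing import Optional

def latest_checkpoint_tag(checkpoint_dir: str) -> Optional[str]:
    # Hypothetical helper approximating the check DeepSpeed performs before loading.
    latest_path = os.path.join(checkpoint_dir, "latest")
    if not os.path.isfile(latest_path):
        return None  # the "Unable to find latest file" path taken in this run
    with open(latest_path) as f:
        return f.read().strip()

print(latest_checkpoint_tag("checkpoints_146m3b9100mdedup"))  # None on this first run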
0: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 2: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: WARNING: could not find the metadata file checkpoints_146m3b9100mdedup 5: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: will not load any checkpoints and will start from random 7: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 6: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 4: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 5: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 6: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 2: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 4: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 7: [2023-03-16 22:51:12,185] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m3b9100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 
7: time (ms) | load-checkpoint: 8.40
0: estimated model parameters: 0.146525952
0: estimated model parameters without embeddings: 0.106319616
0: [after model, optimizer, and learning rate scheduler are built] datetime: 2023-03-16 22:51:13
0: > building train, validation, and test datasets ...
0: > datasets target sizes (minimum size):
0:     train:      1922149
0:     validation: 2048
0:     test:       256
0: > building train, validation, and test datasets for GPT ...
0:  > building dataset index ...
0:     reading sizes...
0:     reading pointers...
0:     reading document index...
0:     creating numpy buffer of mmap...
0:     creating memory view of numpy buffer...
0:  > finished creating indexed dataset in 0.015139 seconds
0:     number of documents: 409500
0:  > dataset split:
0:     train:
0:      document indices in [0, 409500) total of 409500 documents
0:  > WARNING: could not find index map files, building the indices on rank 0 ...
0:  > last epoch number of samples (39164) is smaller than 95.0% of number of samples per epoch (48281), setting separate_last_epoch to True
0:  > elapsed time to build and save doc-idx mapping (seconds): 0.803222
0:     using:
0:      number of documents:     409500
0:      number of epochs:        40
0:      sequence length:         2048
0:      total number of samples: 1931266
0:  > elapsed time to build and save sample-idx mapping (seconds): 0.061698
0:  > building shuffle index with split [0, 1882985) and [1882985, 1931266) ...
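The index-building numbers above are internally consistent; the short sketch below simply re-derives them from the values echoed in the log (a back-of-the-envelope check, not Megatron's actual index-building code).

# Sanity check of the dataset sizing above, using only values printed in the log.
seq_len        = 2048
target_samples = 1_922_149              # requested training samples
built_samples  = 1_931_266              # samples produced across 40 epochs
last_epoch     = 1_931_266 - 1_882_985  # size of the separately shuffled final range

print(target_samples * seq_len)  # 3,936,561,152 -> roughly 3.9B training tokens requested
print(built_samples / 40)        # ~48,281.65 samples per epoch on average
print(last_epoch)                # 48,281 -> one full epoch shuffled on its own,
                                 # consistent with separate_last_epoch = True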
0:  > elapsed time to build and save shuffle-idx mapping (seconds): 0.047524
0:  > loading doc-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_1922149ns_2048sl_1234s_doc_idx.npy
0:  > loading sample-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_1922149ns_2048sl_1234s_sample_idx.npy
0:  > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_1922149ns_2048sl_1234s_shuffle_idx.npy
0:     loaded indexed file in 0.017 seconds
0:     total number of samples: 1931267
0:     total number of epochs: 40
0:  > building dataset index ...
0:     reading sizes...
0:     reading pointers...
0:     reading document index...
0:     creating numpy buffer of mmap...
0:     creating memory view of numpy buffer...
0:  > finished creating indexed dataset in 0.061236 seconds
0:     number of documents: 364608
0:  > dataset split:
0:     validation:
0:      document indices in [0, 364608) total of 364608 documents
0:  > loading doc-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_2048ns_2048sl_1234s_doc_idx.npy
0:  > loading sample-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_2048ns_2048sl_1234s_sample_idx.npy
0:  > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_2048ns_2048sl_1234s_shuffle_idx.npy
0:     loaded indexed file in 0.085 seconds
0:     total number of samples: 84978
0:     total number of epochs: 1
0: > finished creating GPT datasets ...
0: [after dataloaders are built] datetime: 2023-03-16 22:51:28
0: done with setup ...
0: training ...
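Before the iteration log starts, it helps to see how its per-step numbers fit together: every iteration consumes one global batch of 256 samples (256 x 2048 = 524,288 tokens), and 1,922,149 requested samples divided by 256 gives the 7,508 total iterations shown in each line below. The reported TFLOPs figure can also be approximated with the usual Megatron-style FLOPs estimate; the sketch below additionally assumes 64 GPUs in total (consistent with micro_batch_size=4 and micro_batches=1 against the global batch of 256), a padded vocabulary of 50,304, and no activation recomputation, so treat it as an approximation rather than the exact logging code.

# Rough reproduction of the per-iteration numbers in the log below (a sketch,
# not the training code). Assumptions: 64 GPUs, hidden 768, 15 layers,
# sequence length 2048, padded vocabulary 50,304, no activation recomputation.
B, s, layers, h, V, n_gpus = 256, 2048, 15, 768, 50_304, 64

tokens_per_iter = B * s       # 524,288 -> step size of "consumed tokens"
total_iters = 1_922_149 // B  # 7,508   -> the "/ 7508" in every iteration line

# Megatron-style FLOPs estimate (forward + backward ~ 3x a forward pass):
flops_per_iter = 24 * 3 * B * s * layers * h**2 * (1 + s / (6 * h) + V / (16 * layers * h))

iter_time = B / 826.367       # derived from "samples per second" at iteration 20
print(flops_per_iter / (iter_time * n_gpus) / 1e12)  # ~28.9 TFLOPs per GPU, as logged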
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings:
7: time (ms) | model-and-optimizer-setup: 17852.01 | train/valid/test-data-iterators-setup: 14942.84
0: [000-000] 0.1465B / 0.1063B
0: [before the start of training step] datetime: 2023-03-16 22:51:28
0: [Rank 0] (after 10 iterations) memory (MB) | allocated: 2734.17236328125 | max allocated: 22586.583984375 | reserved: 23360.0 | max reserved: 23360.0
7: iteration 10/ 7508 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (s): 1.56 | learning rate: 2.664E-05 | global batch size: 256 | lm loss: 1.041663E+01 | grad norm: 2.555 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 164.245 | TFLOPs: 5.75 |
7: iteration 20/ 7508 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (s): 0.31 | learning rate: 5.328E-05 | global batch size: 256 | lm loss: 9.452387E+00 | grad norm: 2.020 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 826.367 | TFLOPs: 28.93 |
7: iteration 30/ 7508 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (s): 0.31 | learning rate: 7.991E-05 | global batch size: 256 | lm loss: 8.858395E+00 | grad norm: 1.867 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 836.871 | TFLOPs: 29.30 |
7: iteration 40/ 7508 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (s): 0.31 | learning rate: 1.066E-04 | global batch size: 256 | lm loss: 8.176444E+00 | grad norm: 1.220 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 817.972 | TFLOPs: 28.63 |
7: iteration 50/ 7508 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (s): 0.32 | learning rate: 1.332E-04 | global batch size: 256 | lm loss: 7.543407E+00 | grad norm: 0.926 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 792.251 | TFLOPs: 27.73 |
7: iteration 60/ 7508 | consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (s): 0.32 | learning rate: 1.598E-04 | global batch size: 256 | lm loss: 7.128316E+00 | grad norm: 0.976 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 794.095 | TFLOPs: 27.80 |
7: iteration 70/ 7508 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (s): 0.33 | learning rate: 1.865E-04 | global batch size: 256 | lm loss: 6.912889E+00 | grad norm: 0.757 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 782.560 | TFLOPs: 27.40 |
7: iteration 80/ 7508 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (s): 0.32 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.741431E+00 | grad norm: 0.904 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 808.677 | TFLOPs: 28.31 |
7: iteration 90/ 7508 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (s): 0.31 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.605281E+00 | grad norm: 0.426 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 818.628 | TFLOPs: 28.66 |
7: iteration 100/ 7508 | consumed samples: 25600 | consumed tokens: 52428800
| elapsed time per iteration (s): 0.33 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.490516E+00 | grad norm: 0.276 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 779.439 | TFLOPs: 27.29 | 7: iteration 110/ 7508 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (s): 0.36 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.432279E+00 | grad norm: 0.445 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 716.297 | TFLOPs: 25.08 | 7: iteration 120/ 7508 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (s): 0.32 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.369866E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 807.256 | TFLOPs: 28.26 | 7: iteration 130/ 7508 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (s): 0.32 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.318628E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 801.222 | TFLOPs: 28.05 | 7: iteration 140/ 7508 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (s): 0.32 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.283165E+00 | grad norm: 0.335 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 796.964 | TFLOPs: 27.90 | 7: iteration 150/ 7508 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (s): 0.32 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.243309E+00 | grad norm: 0.502 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 799.836 | TFLOPs: 28.00 | 7: iteration 160/ 7508 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (s): 0.31 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.227382E+00 | grad norm: 0.336 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 820.618 | TFLOPs: 28.73 | 7: iteration 170/ 7508 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (s): 0.31 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.174668E+00 | grad norm: 0.383 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 837.358 | TFLOPs: 29.31 | 7: iteration 180/ 7508 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (s): 0.31 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.152316E+00 | grad norm: 0.306 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 829.826 | TFLOPs: 29.05 | 7: iteration 190/ 7508 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (s): 0.32 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.147634E+00 | grad norm: 0.725 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 796.549 | TFLOPs: 27.88 | 7: iteration 200/ 7508 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 0.31 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.121821E+00 | grad norm: 0.345 | num zeros: 0.0 | 
number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 839.283 | TFLOPs: 29.38 | 7: iteration 210/ 7508 | consumed samples: 53760 | consumed tokens: 110100480 | elapsed time per iteration (s): 0.31 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.082714E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 819.737 | TFLOPs: 28.70 | 7: iteration 220/ 7508 | consumed samples: 56320 | consumed tokens: 115343360 | elapsed time per iteration (s): 0.31 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 6.071125E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 813.772 | TFLOPs: 28.49 | 7: iteration 230/ 7508 | consumed samples: 58880 | consumed tokens: 120586240 | elapsed time per iteration (s): 0.31 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 6.047761E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 832.137 | TFLOPs: 29.13 | 7: iteration 240/ 7508 | consumed samples: 61440 | consumed tokens: 125829120 | elapsed time per iteration (s): 0.31 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 6.030515E+00 | grad norm: 0.762 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 834.329 | TFLOPs: 29.21 | 7: iteration 250/ 7508 | consumed samples: 64000 | consumed tokens: 131072000 | elapsed time per iteration (s): 0.31 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 6.001125E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 825.086 | TFLOPs: 28.88 | 7: iteration 260/ 7508 | consumed samples: 66560 | consumed tokens: 136314880 | elapsed time per iteration (s): 0.31 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 5.993174E+00 | grad norm: 0.730 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 821.824 | TFLOPs: 28.77 | 7: iteration 270/ 7508 | consumed samples: 69120 | consumed tokens: 141557760 | elapsed time per iteration (s): 0.30 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 5.977674E+00 | grad norm: 0.358 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 846.214 | TFLOPs: 29.62 | 7: iteration 280/ 7508 | consumed samples: 71680 | consumed tokens: 146800640 | elapsed time per iteration (s): 0.30 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 5.968029E+00 | grad norm: 1.196 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 857.167 | TFLOPs: 30.01 | 7: iteration 290/ 7508 | consumed samples: 74240 | consumed tokens: 152043520 | elapsed time per iteration (s): 0.31 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 5.989154E+00 | grad norm: 0.369 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 837.363 | TFLOPs: 29.31 | 7: iteration 300/ 7508 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 0.30 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 5.931384E+00 | grad norm: 0.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 844.290 | TFLOPs: 29.56 | 7: iteration 310/ 7508 | consumed 
samples: 79360 | consumed tokens: 162529280 | elapsed time per iteration (s): 0.31 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 5.909166E+00 | grad norm: 0.361 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 838.994 | TFLOPs: 29.37 | 7: iteration 320/ 7508 | consumed samples: 81920 | consumed tokens: 167772160 | elapsed time per iteration (s): 0.30 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 5.883252E+00 | grad norm: 0.641 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 855.142 | TFLOPs: 29.94 | 7: iteration 330/ 7508 | consumed samples: 84480 | consumed tokens: 173015040 | elapsed time per iteration (s): 0.30 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 5.862987E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 845.461 | TFLOPs: 29.60 | 7: iteration 340/ 7508 | consumed samples: 87040 | consumed tokens: 178257920 | elapsed time per iteration (s): 0.31 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 5.847193E+00 | grad norm: 0.770 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 827.080 | TFLOPs: 28.95 | 7: iteration 350/ 7508 | consumed samples: 89600 | consumed tokens: 183500800 | elapsed time per iteration (s): 0.31 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 5.844314E+00 | grad norm: 0.539 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 819.498 | TFLOPs: 28.69 | 7: iteration 360/ 7508 | consumed samples: 92160 | consumed tokens: 188743680 | elapsed time per iteration (s): 0.31 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 5.791601E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 835.314 | TFLOPs: 29.24 | 7: iteration 370/ 7508 | consumed samples: 94720 | consumed tokens: 193986560 | elapsed time per iteration (s): 0.31 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 5.772537E+00 | grad norm: 0.471 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 815.436 | TFLOPs: 28.55 | 7: iteration 380/ 7508 | consumed samples: 97280 | consumed tokens: 199229440 | elapsed time per iteration (s): 0.31 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 5.737420E+00 | grad norm: 0.513 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 831.328 | TFLOPs: 29.10 | 7: iteration 390/ 7508 | consumed samples: 99840 | consumed tokens: 204472320 | elapsed time per iteration (s): 0.31 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 5.740631E+00 | grad norm: 0.538 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 837.334 | TFLOPs: 29.31 | 7: iteration 400/ 7508 | consumed samples: 102400 | consumed tokens: 209715200 | elapsed time per iteration (s): 0.32 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 5.701233E+00 | grad norm: 0.459 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 788.146 | TFLOPs: 27.59 | 7: iteration 410/ 7508 | consumed samples: 104960 | consumed tokens: 214958080 | elapsed time per iteration (s): 0.31 | learning rate: 1.991E-04 | global batch size: 256 | lm 
loss: 5.691037E+00 | grad norm: 0.714 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 827.779 | TFLOPs: 28.98 | 7: iteration 420/ 7508 | consumed samples: 107520 | consumed tokens: 220200960 | elapsed time per iteration (s): 0.30 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 5.675851E+00 | grad norm: 0.382 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 849.154 | TFLOPs: 29.73 | 7: iteration 430/ 7508 | consumed samples: 110080 | consumed tokens: 225443840 | elapsed time per iteration (s): 0.31 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 5.646352E+00 | grad norm: 0.499 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 836.597 | TFLOPs: 29.29 | 7: iteration 440/ 7508 | consumed samples: 112640 | consumed tokens: 230686720 | elapsed time per iteration (s): 0.31 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 5.620514E+00 | grad norm: 0.355 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 837.067 | TFLOPs: 29.30 | 7: iteration 450/ 7508 | consumed samples: 115200 | consumed tokens: 235929600 | elapsed time per iteration (s): 0.32 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 5.592965E+00 | grad norm: 0.620 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 807.084 | TFLOPs: 28.25 | 7: iteration 460/ 7508 | consumed samples: 117760 | consumed tokens: 241172480 | elapsed time per iteration (s): 0.30 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 5.581018E+00 | grad norm: 0.908 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 850.071 | TFLOPs: 29.76 | 7: iteration 470/ 7508 | consumed samples: 120320 | consumed tokens: 246415360 | elapsed time per iteration (s): 0.30 | learning rate: 1.987E-04 | global batch size: 256 | lm loss: 5.562185E+00 | grad norm: 0.367 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 844.782 | TFLOPs: 29.57 | 7: iteration 480/ 7508 | consumed samples: 122880 | consumed tokens: 251658240 | elapsed time per iteration (s): 0.31 | learning rate: 1.987E-04 | global batch size: 256 | lm loss: 5.526751E+00 | grad norm: 0.838 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 833.430 | TFLOPs: 29.18 | 7: iteration 490/ 7508 | consumed samples: 125440 | consumed tokens: 256901120 | elapsed time per iteration (s): 0.31 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 5.532006E+00 | grad norm: 0.692 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 834.966 | TFLOPs: 29.23 | 7: iteration 500/ 7508 | consumed samples: 128000 | consumed tokens: 262144000 | elapsed time per iteration (s): 0.31 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 5.514582E+00 | grad norm: 0.857 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 834.057 | TFLOPs: 29.20 | 7: iteration 510/ 7508 | consumed samples: 130560 | consumed tokens: 267386880 | elapsed time per iteration (s): 0.31 | learning rate: 1.985E-04 | global batch size: 256 | lm loss: 5.494765E+00 | grad norm: 0.674 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per 
second: 838.061 | TFLOPs: 29.34 | 7: iteration 520/ 7508 | consumed samples: 133120 | consumed tokens: 272629760 | elapsed time per iteration (s): 0.31 | learning rate: 1.984E-04 | global batch size: 256 | lm loss: 5.453589E+00 | grad norm: 0.460 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 835.249 | TFLOPs: 29.24 | 7: iteration 530/ 7508 | consumed samples: 135680 | consumed tokens: 277872640 | elapsed time per iteration (s): 0.32 | learning rate: 1.983E-04 | global batch size: 256 | lm loss: 5.432267E+00 | grad norm: 0.694 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 807.450 | TFLOPs: 28.27 | 7: iteration 540/ 7508 | consumed samples: 138240 | consumed tokens: 283115520 | elapsed time per iteration (s): 0.30 | learning rate: 1.983E-04 | global batch size: 256 | lm loss: 5.426010E+00 | grad norm: 0.622 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 851.658 | TFLOPs: 29.81 | 7: iteration 550/ 7508 | consumed samples: 140800 | consumed tokens: 288358400 | elapsed time per iteration (s): 0.30 | learning rate: 1.982E-04 | global batch size: 256 | lm loss: 5.379110E+00 | grad norm: 0.689 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 839.833 | TFLOPs: 29.40 | 7: iteration 560/ 7508 | consumed samples: 143360 | consumed tokens: 293601280 | elapsed time per iteration (s): 0.32 | learning rate: 1.981E-04 | global batch size: 256 | lm loss: 5.378224E+00 | grad norm: 0.628 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 805.615 | TFLOPs: 28.20 | 7: iteration 570/ 7508 | consumed samples: 145920 | consumed tokens: 298844160 | elapsed time per iteration (s): 0.30 | learning rate: 1.980E-04 | global batch size: 256 | lm loss: 5.366313E+00 | grad norm: 0.519 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 841.107 | TFLOPs: 29.44 | 7: iteration 580/ 7508 | consumed samples: 148480 | consumed tokens: 304087040 | elapsed time per iteration (s): 0.30 | learning rate: 1.980E-04 | global batch size: 256 | lm loss: 5.335563E+00 | grad norm: 0.887 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 844.821 | TFLOPs: 29.57 | 7: iteration 590/ 7508 | consumed samples: 151040 | consumed tokens: 309329920 | elapsed time per iteration (s): 0.30 | learning rate: 1.979E-04 | global batch size: 256 | lm loss: 5.346872E+00 | grad norm: 0.687 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 855.090 | TFLOPs: 29.93 | 7: iteration 600/ 7508 | consumed samples: 153600 | consumed tokens: 314572800 | elapsed time per iteration (s): 0.30 | learning rate: 1.978E-04 | global batch size: 256 | lm loss: 5.292423E+00 | grad norm: 0.506 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 846.717 | TFLOPs: 29.64 | 7: iteration 610/ 7508 | consumed samples: 156160 | consumed tokens: 319815680 | elapsed time per iteration (s): 0.30 | learning rate: 1.977E-04 | global batch size: 256 | lm loss: 5.288706E+00 | grad norm: 0.836 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 845.956 | TFLOPs: 29.61 | 7: iteration 620/ 7508 | consumed samples: 158720 | consumed tokens: 325058560 | elapsed time per 
iteration (s): 0.31 | learning rate: 1.976E-04 | global batch size: 256 | lm loss: 5.274877E+00 | grad norm: 0.762 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 832.321 | TFLOPs: 29.14 | 7: iteration 630/ 7508 | consumed samples: 161280 | consumed tokens: 330301440 | elapsed time per iteration (s): 0.30 | learning rate: 1.975E-04 | global batch size: 256 | lm loss: 5.259012E+00 | grad norm: 0.488 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 852.983 | TFLOPs: 29.86 | 7: iteration 640/ 7508 | consumed samples: 163840 | consumed tokens: 335544320 | elapsed time per iteration (s): 0.30 | learning rate: 1.974E-04 | global batch size: 256 | lm loss: 5.230642E+00 | grad norm: 0.648 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 843.812 | TFLOPs: 29.54 | 7: iteration 650/ 7508 | consumed samples: 166400 | consumed tokens: 340787200 | elapsed time per iteration (s): 0.30 | learning rate: 1.974E-04 | global batch size: 256 | lm loss: 5.216769E+00 | grad norm: 0.576 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 855.774 | TFLOPs: 29.96 | 7: iteration 660/ 7508 | consumed samples: 168960 | consumed tokens: 346030080 | elapsed time per iteration (s): 0.30 | learning rate: 1.973E-04 | global batch size: 256 | lm loss: 5.197593E+00 | grad norm: 0.806 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 849.431 | TFLOPs: 29.74 | 7: iteration 670/ 7508 | consumed samples: 171520 | consumed tokens: 351272960 | elapsed time per iteration (s): 0.30 | learning rate: 1.972E-04 | global batch size: 256 | lm loss: 5.194070E+00 | grad norm: 0.613 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 855.105 | TFLOPs: 29.93 | 7: iteration 680/ 7508 | consumed samples: 174080 | consumed tokens: 356515840 | elapsed time per iteration (s): 0.30 | learning rate: 1.971E-04 | global batch size: 256 | lm loss: 5.167617E+00 | grad norm: 1.029 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 851.880 | TFLOPs: 29.82 | 7: iteration 690/ 7508 | consumed samples: 176640 | consumed tokens: 361758720 | elapsed time per iteration (s): 0.30 | learning rate: 1.970E-04 | global batch size: 256 | lm loss: 5.152538E+00 | grad norm: 0.610 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.130 | TFLOPs: 30.18 | 7: iteration 700/ 7508 | consumed samples: 179200 | consumed tokens: 367001600 | elapsed time per iteration (s): 0.31 | learning rate: 1.969E-04 | global batch size: 256 | lm loss: 5.148540E+00 | grad norm: 0.548 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 835.711 | TFLOPs: 29.26 | 7: iteration 710/ 7508 | consumed samples: 181760 | consumed tokens: 372244480 | elapsed time per iteration (s): 0.30 | learning rate: 1.968E-04 | global batch size: 256 | lm loss: 5.135507E+00 | grad norm: 0.828 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 861.528 | TFLOPs: 30.16 | 7: iteration 720/ 7508 | consumed samples: 184320 | consumed tokens: 377487360 | elapsed time per iteration (s): 0.30 | learning rate: 1.967E-04 | global batch size: 256 | lm loss: 5.114986E+00 | grad norm: 0.964 | num zeros: 0.0 | 
number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 863.641 | TFLOPs: 30.23 | 7: iteration 730/ 7508 | consumed samples: 186880 | consumed tokens: 382730240 | elapsed time per iteration (s): 0.30 | learning rate: 1.966E-04 | global batch size: 256 | lm loss: 5.093393E+00 | grad norm: 0.547 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 854.361 | TFLOPs: 29.91 | 7: iteration 740/ 7508 | consumed samples: 189440 | consumed tokens: 387973120 | elapsed time per iteration (s): 0.30 | learning rate: 1.965E-04 | global batch size: 256 | lm loss: 5.081531E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 847.157 | TFLOPs: 29.66 | 7: iteration 750/ 7508 | consumed samples: 192000 | consumed tokens: 393216000 | elapsed time per iteration (s): 0.30 | learning rate: 1.964E-04 | global batch size: 256 | lm loss: 5.069793E+00 | grad norm: 0.718 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.902 | TFLOPs: 30.21 | 7: iteration 760/ 7508 | consumed samples: 194560 | consumed tokens: 398458880 | elapsed time per iteration (s): 0.31 | learning rate: 1.963E-04 | global batch size: 256 | lm loss: 5.057330E+00 | grad norm: 0.615 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 832.825 | TFLOPs: 29.15 | 7: iteration 770/ 7508 | consumed samples: 197120 | consumed tokens: 403701760 | elapsed time per iteration (s): 0.30 | learning rate: 1.961E-04 | global batch size: 256 | lm loss: 5.035075E+00 | grad norm: 0.513 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.412 | TFLOPs: 30.12 | 7: iteration 780/ 7508 | consumed samples: 199680 | consumed tokens: 408944640 | elapsed time per iteration (s): 0.30 | learning rate: 1.960E-04 | global batch size: 256 | lm loss: 5.034676E+00 | grad norm: 0.746 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.991 | TFLOPs: 30.00 | 7: iteration 790/ 7508 | consumed samples: 202240 | consumed tokens: 414187520 | elapsed time per iteration (s): 0.29 | learning rate: 1.959E-04 | global batch size: 256 | lm loss: 5.018583E+00 | grad norm: 0.479 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.583 | TFLOPs: 30.48 | 7: iteration 800/ 7508 | consumed samples: 204800 | consumed tokens: 419430400 | elapsed time per iteration (s): 0.30 | learning rate: 1.958E-04 | global batch size: 256 | lm loss: 5.003476E+00 | grad norm: 0.542 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 850.260 | TFLOPs: 29.77 | 7: iteration 810/ 7508 | consumed samples: 207360 | consumed tokens: 424673280 | elapsed time per iteration (s): 0.30 | learning rate: 1.957E-04 | global batch size: 256 | lm loss: 4.993603E+00 | grad norm: 0.682 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 854.535 | TFLOPs: 29.91 | 7: iteration 820/ 7508 | consumed samples: 209920 | consumed tokens: 429916160 | elapsed time per iteration (s): 0.30 | learning rate: 1.956E-04 | global batch size: 256 | lm loss: 4.966439E+00 | grad norm: 0.708 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.100 | TFLOPs: 30.11 | 7: iteration 830/ 7508 | 
consumed samples: 212480 | consumed tokens: 435159040 | elapsed time per iteration (s): 0.30 | learning rate: 1.955E-04 | global batch size: 256 | lm loss: 4.970242E+00 | grad norm: 0.714 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 863.483 | TFLOPs: 30.23 | 7: iteration 840/ 7508 | consumed samples: 215040 | consumed tokens: 440401920 | elapsed time per iteration (s): 0.31 | learning rate: 1.953E-04 | global batch size: 256 | lm loss: 4.961034E+00 | grad norm: 0.596 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 838.675 | TFLOPs: 29.36 | 7: iteration 850/ 7508 | consumed samples: 217600 | consumed tokens: 445644800 | elapsed time per iteration (s): 0.30 | learning rate: 1.952E-04 | global batch size: 256 | lm loss: 4.929147E+00 | grad norm: 0.518 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 841.821 | TFLOPs: 29.47 | 7: iteration 860/ 7508 | consumed samples: 220160 | consumed tokens: 450887680 | elapsed time per iteration (s): 0.30 | learning rate: 1.951E-04 | global batch size: 256 | lm loss: 4.921621E+00 | grad norm: 0.706 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 847.719 | TFLOPs: 29.68 | 7: iteration 870/ 7508 | consumed samples: 222720 | consumed tokens: 456130560 | elapsed time per iteration (s): 0.29 | learning rate: 1.950E-04 | global batch size: 256 | lm loss: 4.923571E+00 | grad norm: 0.787 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.430 | TFLOPs: 30.47 | 7: iteration 880/ 7508 | consumed samples: 225280 | consumed tokens: 461373440 | elapsed time per iteration (s): 0.30 | learning rate: 1.948E-04 | global batch size: 256 | lm loss: 4.918399E+00 | grad norm: 0.517 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 845.899 | TFLOPs: 29.61 | 7: iteration 890/ 7508 | consumed samples: 227840 | consumed tokens: 466616320 | elapsed time per iteration (s): 0.29 | learning rate: 1.947E-04 | global batch size: 256 | lm loss: 4.881538E+00 | grad norm: 0.688 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.747 | TFLOPs: 30.45 | 7: iteration 900/ 7508 | consumed samples: 230400 | consumed tokens: 471859200 | elapsed time per iteration (s): 0.30 | learning rate: 1.946E-04 | global batch size: 256 | lm loss: 4.895176E+00 | grad norm: 0.736 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.218 | TFLOPs: 30.04 | 7: iteration 910/ 7508 | consumed samples: 232960 | consumed tokens: 477102080 | elapsed time per iteration (s): 0.30 | learning rate: 1.945E-04 | global batch size: 256 | lm loss: 4.869870E+00 | grad norm: 0.653 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 849.817 | TFLOPs: 29.75 | 7: iteration 920/ 7508 | consumed samples: 235520 | consumed tokens: 482344960 | elapsed time per iteration (s): 0.30 | learning rate: 1.943E-04 | global batch size: 256 | lm loss: 4.858638E+00 | grad norm: 0.710 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.728 | TFLOPs: 30.10 | 7: iteration 930/ 7508 | consumed samples: 238080 | consumed tokens: 487587840 | elapsed time per iteration (s): 0.29 | learning rate: 1.942E-04 | global batch 
size: 256 | lm loss: 4.828797E+00 | grad norm: 0.677 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.324 | TFLOPs: 30.47 | 7: iteration 940/ 7508 | consumed samples: 240640 | consumed tokens: 492830720 | elapsed time per iteration (s): 0.29 | learning rate: 1.941E-04 | global batch size: 256 | lm loss: 4.837301E+00 | grad norm: 0.919 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.099 | TFLOPs: 30.46 | 7: iteration 950/ 7508 | consumed samples: 243200 | consumed tokens: 498073600 | elapsed time per iteration (s): 0.30 | learning rate: 1.939E-04 | global batch size: 256 | lm loss: 4.847870E+00 | grad norm: 0.817 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.824 | TFLOPs: 30.00 | 7: iteration 960/ 7508 | consumed samples: 245760 | consumed tokens: 503316480 | elapsed time per iteration (s): 0.29 | learning rate: 1.938E-04 | global batch size: 256 | lm loss: 4.813546E+00 | grad norm: 0.510 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.024 | TFLOPs: 30.42 | 7: iteration 970/ 7508 | consumed samples: 248320 | consumed tokens: 508559360 | elapsed time per iteration (s): 0.29 | learning rate: 1.936E-04 | global batch size: 256 | lm loss: 4.787366E+00 | grad norm: 0.553 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.213 | TFLOPs: 30.43 | 7: iteration 980/ 7508 | consumed samples: 250880 | consumed tokens: 513802240 | elapsed time per iteration (s): 0.29 | learning rate: 1.935E-04 | global batch size: 256 | lm loss: 4.789575E+00 | grad norm: 0.670 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.440 | TFLOPs: 30.47 | 7: iteration 990/ 7508 | consumed samples: 253440 | consumed tokens: 519045120 | elapsed time per iteration (s): 0.30 | learning rate: 1.934E-04 | global batch size: 256 | lm loss: 4.777592E+00 | grad norm: 0.820 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.180 | TFLOPs: 30.18 | 7: iteration 1000/ 7508 | consumed samples: 256000 | consumed tokens: 524288000 | elapsed time per iteration (s): 0.30 | learning rate: 1.932E-04 | global batch size: 256 | lm loss: 4.771298E+00 | grad norm: 0.807 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.392 | TFLOPs: 29.98 | 7: ----------------------------------------------------------------------------------------------- 7: validation loss at iteration 1000 | lm loss value: 4.741335E+00 | lm loss PPL: 1.145871E+02 | 7: ----------------------------------------------------------------------------------------------- 7: iteration 1010/ 7508 | consumed samples: 258560 | consumed tokens: 529530880 | elapsed time per iteration (s): 0.31 | learning rate: 1.931E-04 | global batch size: 256 | lm loss: 4.757132E+00 | grad norm: 0.663 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 822.059 | TFLOPs: 28.78 | 7: iteration 1020/ 7508 | consumed samples: 261120 | consumed tokens: 534773760 | elapsed time per iteration (s): 0.31 | learning rate: 1.929E-04 | global batch size: 256 | lm loss: 4.744170E+00 | grad norm: 0.535 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 820.208 | TFLOPs: 
28.71 | 7: iteration 1030/ 7508 | consumed samples: 263680 | consumed tokens: 540016640 | elapsed time per iteration (s): 0.29 | learning rate: 1.928E-04 | global batch size: 256 | lm loss: 4.722169E+00 | grad norm: 0.723 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.114 | TFLOPs: 30.46 | 7: iteration 1040/ 7508 | consumed samples: 266240 | consumed tokens: 545259520 | elapsed time per iteration (s): 0.30 | learning rate: 1.926E-04 | global batch size: 256 | lm loss: 4.723614E+00 | grad norm: 0.568 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.465 | TFLOPs: 30.19 | 7: iteration 1050/ 7508 | consumed samples: 268800 | consumed tokens: 550502400 | elapsed time per iteration (s): 0.30 | learning rate: 1.925E-04 | global batch size: 256 | lm loss: 4.715654E+00 | grad norm: 0.572 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.498 | TFLOPs: 30.09 | 7: iteration 1060/ 7508 | consumed samples: 271360 | consumed tokens: 555745280 | elapsed time per iteration (s): 0.29 | learning rate: 1.923E-04 | global batch size: 256 | lm loss: 4.699508E+00 | grad norm: 0.576 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.176 | TFLOPs: 30.50 | 7: iteration 1070/ 7508 | consumed samples: 273920 | consumed tokens: 560988160 | elapsed time per iteration (s): 0.30 | learning rate: 1.922E-04 | global batch size: 256 | lm loss: 4.700763E+00 | grad norm: 0.733 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 863.717 | TFLOPs: 30.24 | 7: iteration 1080/ 7508 | consumed samples: 276480 | consumed tokens: 566231040 | elapsed time per iteration (s): 0.29 | learning rate: 1.920E-04 | global batch size: 256 | lm loss: 4.678749E+00 | grad norm: 0.515 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.210 | TFLOPs: 30.46 | 7: iteration 1090/ 7508 | consumed samples: 279040 | consumed tokens: 571473920 | elapsed time per iteration (s): 0.29 | learning rate: 1.918E-04 | global batch size: 256 | lm loss: 4.676783E+00 | grad norm: 0.790 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.814 | TFLOPs: 30.48 | 7: iteration 1100/ 7508 | consumed samples: 281600 | consumed tokens: 576716800 | elapsed time per iteration (s): 0.29 | learning rate: 1.917E-04 | global batch size: 256 | lm loss: 4.675251E+00 | grad norm: 0.528 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.400 | TFLOPs: 30.51 | 7: iteration 1110/ 7508 | consumed samples: 284160 | consumed tokens: 581959680 | elapsed time per iteration (s): 0.30 | learning rate: 1.915E-04 | global batch size: 256 | lm loss: 4.653316E+00 | grad norm: 0.982 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.585 | TFLOPs: 30.09 | 7: iteration 1120/ 7508 | consumed samples: 286720 | consumed tokens: 587202560 | elapsed time per iteration (s): 0.29 | learning rate: 1.914E-04 | global batch size: 256 | lm loss: 4.666724E+00 | grad norm: 0.583 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.273 | TFLOPs: 30.47 | 7: iteration 1130/ 7508 | consumed samples: 289280 | consumed tokens: 592445440 | elapsed time per iteration (s): 
0.29 | learning rate: 1.912E-04 | global batch size: 256 | lm loss: 4.646425E+00 | grad norm: 0.719 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.740 | TFLOPs: 30.48 | 7: iteration 1140/ 7508 | consumed samples: 291840 | consumed tokens: 597688320 | elapsed time per iteration (s): 0.29 | learning rate: 1.910E-04 | global batch size: 256 | lm loss: 4.627764E+00 | grad norm: 0.601 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.215 | TFLOPs: 30.50 | 7: iteration 1150/ 7508 | consumed samples: 294400 | consumed tokens: 602931200 | elapsed time per iteration (s): 0.29 | learning rate: 1.909E-04 | global batch size: 256 | lm loss: 4.635696E+00 | grad norm: 0.502 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.275 | TFLOPs: 30.50 | 7: iteration 1160/ 7508 | consumed samples: 296960 | consumed tokens: 608174080 | elapsed time per iteration (s): 0.29 | learning rate: 1.907E-04 | global batch size: 256 | lm loss: 4.605647E+00 | grad norm: 0.714 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.689 | TFLOPs: 30.48 | 7: iteration 1170/ 7508 | consumed samples: 299520 | consumed tokens: 613416960 | elapsed time per iteration (s): 0.29 | learning rate: 1.905E-04 | global batch size: 256 | lm loss: 4.616286E+00 | grad norm: 0.644 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.091 | TFLOPs: 30.46 | 7: iteration 1180/ 7508 | consumed samples: 302080 | consumed tokens: 618659840 | elapsed time per iteration (s): 0.29 | learning rate: 1.904E-04 | global batch size: 256 | lm loss: 4.603827E+00 | grad norm: 0.570 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.159 | TFLOPs: 30.50 | 7: iteration 1190/ 7508 | consumed samples: 304640 | consumed tokens: 623902720 | elapsed time per iteration (s): 0.29 | learning rate: 1.902E-04 | global batch size: 256 | lm loss: 4.592938E+00 | grad norm: 0.768 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.089 | TFLOPs: 30.49 | 7: iteration 1200/ 7508 | consumed samples: 307200 | consumed tokens: 629145600 | elapsed time per iteration (s): 0.29 | learning rate: 1.900E-04 | global batch size: 256 | lm loss: 4.593076E+00 | grad norm: 0.458 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.177 | TFLOPs: 30.50 | 7: iteration 1210/ 7508 | consumed samples: 309760 | consumed tokens: 634388480 | elapsed time per iteration (s): 0.29 | learning rate: 1.898E-04 | global batch size: 256 | lm loss: 4.586834E+00 | grad norm: 0.542 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.503 | TFLOPs: 30.51 | 7: iteration 1220/ 7508 | consumed samples: 312320 | consumed tokens: 639631360 | elapsed time per iteration (s): 0.29 | learning rate: 1.897E-04 | global batch size: 256 | lm loss: 4.573603E+00 | grad norm: 0.488 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.744 | TFLOPs: 30.48 | 7: iteration 1230/ 7508 | consumed samples: 314880 | consumed tokens: 644874240 | elapsed time per iteration (s): 0.29 | learning rate: 1.895E-04 | global batch size: 256 | lm loss: 4.573224E+00 | grad norm: 0.502 | num zeros: 0.0 | 
number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.306 | TFLOPs: 30.50 | 7: iteration 1240/ 7508 | consumed samples: 317440 | consumed tokens: 650117120 | elapsed time per iteration (s): 0.29 | learning rate: 1.893E-04 | global batch size: 256 | lm loss: 4.556868E+00 | grad norm: 0.635 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.355 | TFLOPs: 30.47 | 7: iteration 1250/ 7508 | consumed samples: 320000 | consumed tokens: 655360000 | elapsed time per iteration (s): 0.29 | learning rate: 1.891E-04 | global batch size: 256 | lm loss: 4.557083E+00 | grad norm: 0.470 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.066 | TFLOPs: 30.49 | 7: iteration 1260/ 7508 | consumed samples: 322560 | consumed tokens: 660602880 | elapsed time per iteration (s): 0.29 | learning rate: 1.889E-04 | global batch size: 256 | lm loss: 4.555325E+00 | grad norm: 0.522 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.524 | TFLOPs: 30.51 | 7: iteration 1270/ 7508 | consumed samples: 325120 | consumed tokens: 665845760 | elapsed time per iteration (s): 0.30 | learning rate: 1.888E-04 | global batch size: 256 | lm loss: 4.542716E+00 | grad norm: 0.592 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.058 | TFLOPs: 30.28 | 7: iteration 1280/ 7508 | consumed samples: 327680 | consumed tokens: 671088640 | elapsed time per iteration (s): 0.29 | learning rate: 1.886E-04 | global batch size: 256 | lm loss: 4.536784E+00 | grad norm: 0.516 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.944 | TFLOPs: 30.49 | 7: iteration 1290/ 7508 | consumed samples: 330240 | consumed tokens: 676331520 | elapsed time per iteration (s): 0.29 | learning rate: 1.884E-04 | global batch size: 256 | lm loss: 4.540925E+00 | grad norm: 0.746 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.352 | TFLOPs: 30.50 | 7: iteration 1300/ 7508 | consumed samples: 332800 | consumed tokens: 681574400 | elapsed time per iteration (s): 0.29 | learning rate: 1.882E-04 | global batch size: 256 | lm loss: 4.524805E+00 | grad norm: 0.698 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.243 | TFLOPs: 30.50 | 7: iteration 1310/ 7508 | consumed samples: 335360 | consumed tokens: 686817280 | elapsed time per iteration (s): 0.30 | learning rate: 1.880E-04 | global batch size: 256 | lm loss: 4.524753E+00 | grad norm: 0.542 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.706 | TFLOPs: 30.06 | 7: iteration 1320/ 7508 | consumed samples: 337920 | consumed tokens: 692060160 | elapsed time per iteration (s): 0.30 | learning rate: 1.878E-04 | global batch size: 256 | lm loss: 4.517230E+00 | grad norm: 0.583 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.851 | TFLOPs: 30.21 | 7: iteration 1330/ 7508 | consumed samples: 340480 | consumed tokens: 697303040 | elapsed time per iteration (s): 0.29 | learning rate: 1.876E-04 | global batch size: 256 | lm loss: 4.514744E+00 | grad norm: 0.655 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.939 | TFLOPs: 30.42 | 7: iteration 
1340/ 7508 | consumed samples: 343040 | consumed tokens: 702545920 | elapsed time per iteration (s): 0.29 | learning rate: 1.874E-04 | global batch size: 256 | lm loss: 4.503257E+00 | grad norm: 0.580 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.581 | TFLOPs: 30.44 | 7: iteration 1350/ 7508 | consumed samples: 345600 | consumed tokens: 707788800 | elapsed time per iteration (s): 0.30 | learning rate: 1.872E-04 | global batch size: 256 | lm loss: 4.496388E+00 | grad norm: 0.454 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.818 | TFLOPs: 30.06 | 7: iteration 1360/ 7508 | consumed samples: 348160 | consumed tokens: 713031680 | elapsed time per iteration (s): 0.29 | learning rate: 1.871E-04 | global batch size: 256 | lm loss: 4.493895E+00 | grad norm: 0.612 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.146 | TFLOPs: 30.46 | 7: iteration 1370/ 7508 | consumed samples: 350720 | consumed tokens: 718274560 | elapsed time per iteration (s): 0.29 | learning rate: 1.869E-04 | global batch size: 256 | lm loss: 4.491641E+00 | grad norm: 0.465 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.363 | TFLOPs: 30.47 | 7: iteration 1380/ 7508 | consumed samples: 353280 | consumed tokens: 723517440 | elapsed time per iteration (s): 0.30 | learning rate: 1.867E-04 | global batch size: 256 | lm loss: 4.478929E+00 | grad norm: 0.540 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.506 | TFLOPs: 30.05 | 7: iteration 1390/ 7508 | consumed samples: 355840 | consumed tokens: 728760320 | elapsed time per iteration (s): 0.29 | learning rate: 1.865E-04 | global batch size: 256 | lm loss: 4.478838E+00 | grad norm: 0.547 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.181 | TFLOPs: 30.46 | 7: iteration 1400/ 7508 | consumed samples: 358400 | consumed tokens: 734003200 | elapsed time per iteration (s): 0.29 | learning rate: 1.863E-04 | global batch size: 256 | lm loss: 4.474532E+00 | grad norm: 0.629 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.852 | TFLOPs: 30.45 | 7: iteration 1410/ 7508 | consumed samples: 360960 | consumed tokens: 739246080 | elapsed time per iteration (s): 0.29 | learning rate: 1.861E-04 | global batch size: 256 | lm loss: 4.469061E+00 | grad norm: 0.563 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.965 | TFLOPs: 30.46 | 7: iteration 1420/ 7508 | consumed samples: 363520 | consumed tokens: 744488960 | elapsed time per iteration (s): 0.29 | learning rate: 1.858E-04 | global batch size: 256 | lm loss: 4.457317E+00 | grad norm: 0.523 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.883 | TFLOPs: 30.45 | 7: iteration 1430/ 7508 | consumed samples: 366080 | consumed tokens: 749731840 | elapsed time per iteration (s): 0.29 | learning rate: 1.856E-04 | global batch size: 256 | lm loss: 4.457580E+00 | grad norm: 0.884 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.311 | TFLOPs: 30.47 | 7: iteration 1440/ 7508 | consumed samples: 368640 | consumed tokens: 754974720 | elapsed time per iteration (s): 0.29 | learning rate: 
1.854E-04 | global batch size: 256 | lm loss: 4.463613E+00 | grad norm: 0.521 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.079 | TFLOPs: 30.49 | 7: iteration 1450/ 7508 | consumed samples: 371200 | consumed tokens: 760217600 | elapsed time per iteration (s): 0.29 | learning rate: 1.852E-04 | global batch size: 256 | lm loss: 4.445622E+00 | grad norm: 0.634 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.023 | TFLOPs: 30.49 | 7: iteration 1460/ 7508 | consumed samples: 373760 | consumed tokens: 765460480 | elapsed time per iteration (s): 0.29 | learning rate: 1.850E-04 | global batch size: 256 | lm loss: 4.445939E+00 | grad norm: 0.504 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.329 | TFLOPs: 30.50 | 7: iteration 1470/ 7508 | consumed samples: 376320 | consumed tokens: 770703360 | elapsed time per iteration (s): 0.29 | learning rate: 1.848E-04 | global batch size: 256 | lm loss: 4.439510E+00 | grad norm: 0.486 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.526 | TFLOPs: 30.47 | 7: iteration 1480/ 7508 | consumed samples: 378880 | consumed tokens: 775946240 | elapsed time per iteration (s): 0.29 | learning rate: 1.846E-04 | global batch size: 256 | lm loss: 4.439457E+00 | grad norm: 0.556 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.444 | TFLOPs: 30.47 | 7: iteration 1490/ 7508 | consumed samples: 381440 | consumed tokens: 781189120 | elapsed time per iteration (s): 0.29 | learning rate: 1.844E-04 | global batch size: 256 | lm loss: 4.435397E+00 | grad norm: 0.487 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.472 | TFLOPs: 30.47 | 7: iteration 1500/ 7508 | consumed samples: 384000 | consumed tokens: 786432000 | elapsed time per iteration (s): 0.30 | learning rate: 1.842E-04 | global batch size: 256 | lm loss: 4.431735E+00 | grad norm: 0.527 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 849.021 | TFLOPs: 29.72 | 7: iteration 1510/ 7508 | consumed samples: 386560 | consumed tokens: 791674880 | elapsed time per iteration (s): 0.29 | learning rate: 1.840E-04 | global batch size: 256 | lm loss: 4.423109E+00 | grad norm: 0.585 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.519 | TFLOPs: 30.47 | 7: iteration 1520/ 7508 | consumed samples: 389120 | consumed tokens: 796917760 | elapsed time per iteration (s): 0.30 | learning rate: 1.837E-04 | global batch size: 256 | lm loss: 4.428006E+00 | grad norm: 0.571 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.443 | TFLOPs: 30.09 | 7: iteration 1530/ 7508 | consumed samples: 391680 | consumed tokens: 802160640 | elapsed time per iteration (s): 0.30 | learning rate: 1.835E-04 | global batch size: 256 | lm loss: 4.407991E+00 | grad norm: 0.786 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.541 | TFLOPs: 30.13 | 7: iteration 1540/ 7508 | consumed samples: 394240 | consumed tokens: 807403520 | elapsed time per iteration (s): 0.29 | learning rate: 1.833E-04 | global batch size: 256 | lm loss: 4.409469E+00 | grad norm: 0.614 | num zeros: 0.0 | number of skipped 
iterations: 0 | number of nan iterations: 0 | samples per second: 871.103 | TFLOPs: 30.49 | 7: iteration 1550/ 7508 | consumed samples: 396800 | consumed tokens: 812646400 | elapsed time per iteration (s): 0.29 | learning rate: 1.831E-04 | global batch size: 256 | lm loss: 4.403147E+00 | grad norm: 0.521 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.599 | TFLOPs: 30.48 | 7: iteration 1560/ 7508 | consumed samples: 399360 | consumed tokens: 817889280 | elapsed time per iteration (s): 0.29 | learning rate: 1.829E-04 | global batch size: 256 | lm loss: 4.394737E+00 | grad norm: 0.591 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.601 | TFLOPs: 30.44 | 7: iteration 1570/ 7508 | consumed samples: 401920 | consumed tokens: 823132160 | elapsed time per iteration (s): 0.29 | learning rate: 1.826E-04 | global batch size: 256 | lm loss: 4.400639E+00 | grad norm: 0.492 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.223 | TFLOPs: 30.50 | 7: iteration 1580/ 7508 | consumed samples: 404480 | consumed tokens: 828375040 | elapsed time per iteration (s): 0.29 | learning rate: 1.824E-04 | global batch size: 256 | lm loss: 4.395721E+00 | grad norm: 0.630 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 871.090 | TFLOPs: 30.49 | 7: iteration 1590/ 7508 | consumed samples: 407040 | consumed tokens: 833617920 | elapsed time per iteration (s): 0.29 | learning rate: 1.822E-04 | global batch size: 256 | lm loss: 4.394864E+00 | grad norm: 0.489 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.837 | TFLOPs: 30.49 | 7: iteration 1600/ 7508 | consumed samples: 409600 | consumed tokens: 838860800 | elapsed time per iteration (s): 0.30 | learning rate: 1.819E-04 | global batch size: 256 | lm loss: 4.378889E+00 | grad norm: 0.655 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.381 | TFLOPs: 30.19 | 7: iteration 1610/ 7508 | consumed samples: 412160 | consumed tokens: 844103680 | elapsed time per iteration (s): 0.29 | learning rate: 1.817E-04 | global batch size: 256 | lm loss: 4.387012E+00 | grad norm: 0.605 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.845 | TFLOPs: 30.45 | 7: iteration 1620/ 7508 | consumed samples: 414720 | consumed tokens: 849346560 | elapsed time per iteration (s): 0.30 | learning rate: 1.815E-04 | global batch size: 256 | lm loss: 4.377020E+00 | grad norm: 0.502 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.502 | TFLOPs: 30.30 | 7: iteration 1630/ 7508 | consumed samples: 417280 | consumed tokens: 854589440 | elapsed time per iteration (s): 0.29 | learning rate: 1.813E-04 | global batch size: 256 | lm loss: 4.373136E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.886 | TFLOPs: 30.42 | 7: iteration 1640/ 7508 | consumed samples: 419840 | consumed tokens: 859832320 | elapsed time per iteration (s): 0.30 | learning rate: 1.810E-04 | global batch size: 256 | lm loss: 4.366875E+00 | grad norm: 0.516 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 863.875 | TFLOPs: 30.24 | 7: iteration 1650/ 7508 | 
consumed samples: 422400 | consumed tokens: 865075200 | elapsed time per iteration (s): 0.29 | learning rate: 1.808E-04 | global batch size: 256 | lm loss: 4.362798E+00 | grad norm: 0.498 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.569 | TFLOPs: 30.44 | 7: iteration 1660/ 7508 | consumed samples: 424960 | consumed tokens: 870318080 | elapsed time per iteration (s): 0.29 | learning rate: 1.806E-04 | global batch size: 256 | lm loss: 4.354882E+00 | grad norm: 0.790 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.956 | TFLOPs: 30.42 | 7: iteration 1670/ 7508 | consumed samples: 427520 | consumed tokens: 875560960 | elapsed time per iteration (s): 0.29 | learning rate: 1.803E-04 | global batch size: 256 | lm loss: 4.361714E+00 | grad norm: 0.598 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.147 | TFLOPs: 30.46 | 7: iteration 1680/ 7508 | consumed samples: 430080 | consumed tokens: 880803840 | elapsed time per iteration (s): 0.29 | learning rate: 1.801E-04 | global batch size: 256 | lm loss: 4.356993E+00 | grad norm: 0.504 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.025 | TFLOPs: 30.46 | 7: iteration 1690/ 7508 | consumed samples: 432640 | consumed tokens: 886046720 | elapsed time per iteration (s): 0.29 | learning rate: 1.798E-04 | global batch size: 256 | lm loss: 4.353314E+00 | grad norm: 0.427 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.249 | TFLOPs: 30.46 | 7: iteration 1700/ 7508 | consumed samples: 435200 | consumed tokens: 891289600 | elapsed time per iteration (s): 0.29 | learning rate: 1.796E-04 | global batch size: 256 | lm loss: 4.339358E+00 | grad norm: 0.613 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.426 | TFLOPs: 30.47 | 7: iteration 1710/ 7508 | consumed samples: 437760 | consumed tokens: 896532480 | elapsed time per iteration (s): 0.30 | learning rate: 1.794E-04 | global batch size: 256 | lm loss: 4.339769E+00 | grad norm: 0.599 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.423 | TFLOPs: 30.30 | 7: iteration 1720/ 7508 | consumed samples: 440320 | consumed tokens: 901775360 | elapsed time per iteration (s): 0.29 | learning rate: 1.791E-04 | global batch size: 256 | lm loss: 4.342563E+00 | grad norm: 0.606 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.296 | TFLOPs: 30.43 | 7: iteration 1730/ 7508 | consumed samples: 442880 | consumed tokens: 907018240 | elapsed time per iteration (s): 0.29 | learning rate: 1.789E-04 | global batch size: 256 | lm loss: 4.329942E+00 | grad norm: 0.490 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.154 | TFLOPs: 30.43 | 7: iteration 1740/ 7508 | consumed samples: 445440 | consumed tokens: 912261120 | elapsed time per iteration (s): 0.29 | learning rate: 1.786E-04 | global batch size: 256 | lm loss: 4.325612E+00 | grad norm: 0.535 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.301 | TFLOPs: 30.43 | 7: iteration 1750/ 7508 | consumed samples: 448000 | consumed tokens: 917504000 | elapsed time per iteration (s): 0.29 | learning rate: 1.784E-04 | 
global batch size: 256 | lm loss: 4.327345E+00 | grad norm: 0.537 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.886 | TFLOPs: 30.42 | 7: iteration 1760/ 7508 | consumed samples: 450560 | consumed tokens: 922746880 | elapsed time per iteration (s): 0.29 | learning rate: 1.781E-04 | global batch size: 256 | lm loss: 4.321361E+00 | grad norm: 0.471 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.934 | TFLOPs: 30.42 | 7: iteration 1770/ 7508 | consumed samples: 453120 | consumed tokens: 927989760 | elapsed time per iteration (s): 0.29 | learning rate: 1.779E-04 | global batch size: 256 | lm loss: 4.323942E+00 | grad norm: 0.529 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.584 | TFLOPs: 30.41 | 7: iteration 1780/ 7508 | consumed samples: 455680 | consumed tokens: 933232640 | elapsed time per iteration (s): 0.29 | learning rate: 1.776E-04 | global batch size: 256 | lm loss: 4.322340E+00 | grad norm: 0.555 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.560 | TFLOPs: 30.44 | 7: iteration 1790/ 7508 | consumed samples: 458240 | consumed tokens: 938475520 | elapsed time per iteration (s): 0.29 | learning rate: 1.774E-04 | global batch size: 256 | lm loss: 4.321461E+00 | grad norm: 0.564 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.790 | TFLOPs: 30.41 | 7: iteration 1800/ 7508 | consumed samples: 460800 | consumed tokens: 943718400 | elapsed time per iteration (s): 0.29 | learning rate: 1.771E-04 | global batch size: 256 | lm loss: 4.311466E+00 | grad norm: 0.429 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.430 | TFLOPs: 30.44 | 7: iteration 1810/ 7508 | consumed samples: 463360 | consumed tokens: 948961280 | elapsed time per iteration (s): 0.29 | learning rate: 1.769E-04 | global batch size: 256 | lm loss: 4.306069E+00 | grad norm: 0.553 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.221 | TFLOPs: 30.39 | 7: iteration 1820/ 7508 | consumed samples: 465920 | consumed tokens: 954204160 | elapsed time per iteration (s): 0.29 | learning rate: 1.766E-04 | global batch size: 256 | lm loss: 4.312961E+00 | grad norm: 0.483 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.267 | TFLOPs: 30.43 | 7: iteration 1830/ 7508 | consumed samples: 468480 | consumed tokens: 959447040 | elapsed time per iteration (s): 0.29 | learning rate: 1.764E-04 | global batch size: 256 | lm loss: 4.303714E+00 | grad norm: 0.491 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.058 | TFLOPs: 30.42 | 7: iteration 1840/ 7508 | consumed samples: 471040 | consumed tokens: 964689920 | elapsed time per iteration (s): 0.29 | learning rate: 1.761E-04 | global batch size: 256 | lm loss: 4.299129E+00 | grad norm: 0.468 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.107 | TFLOPs: 30.39 | 7: iteration 1850/ 7508 | consumed samples: 473600 | consumed tokens: 969932800 | elapsed time per iteration (s): 0.29 | learning rate: 1.758E-04 | global batch size: 256 | lm loss: 4.295840E+00 | grad norm: 0.509 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 868.240 | TFLOPs: 30.39 | 7: iteration 1860/ 7508 | consumed samples: 476160 | consumed tokens: 975175680 | elapsed time per iteration (s): 0.29 | learning rate: 1.756E-04 | global batch size: 256 | lm loss: 4.296780E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.158 | TFLOPs: 30.43 | 7: iteration 1870/ 7508 | consumed samples: 478720 | consumed tokens: 980418560 | elapsed time per iteration (s): 0.29 | learning rate: 1.753E-04 | global batch size: 256 | lm loss: 4.283944E+00 | grad norm: 0.494 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.158 | TFLOPs: 30.43 | 7: iteration 1880/ 7508 | consumed samples: 481280 | consumed tokens: 985661440 | elapsed time per iteration (s): 0.29 | learning rate: 1.751E-04 | global batch size: 256 | lm loss: 4.286454E+00 | grad norm: 0.535 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.193 | TFLOPs: 30.39 | 7: iteration 1890/ 7508 | consumed samples: 483840 | consumed tokens: 990904320 | elapsed time per iteration (s): 0.30 | learning rate: 1.748E-04 | global batch size: 256 | lm loss: 4.286639E+00 | grad norm: 0.659 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.658 | TFLOPs: 30.06 | 7: iteration 1900/ 7508 | consumed samples: 486400 | consumed tokens: 996147200 | elapsed time per iteration (s): 0.29 | learning rate: 1.745E-04 | global batch size: 256 | lm loss: 4.288359E+00 | grad norm: 0.503 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.078 | TFLOPs: 30.46 | 7: iteration 1910/ 7508 | consumed samples: 488960 | consumed tokens: 1001390080 | elapsed time per iteration (s): 0.29 | learning rate: 1.743E-04 | global batch size: 256 | lm loss: 4.284259E+00 | grad norm: 0.535 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.104 | TFLOPs: 30.42 | 7: iteration 1920/ 7508 | consumed samples: 491520 | consumed tokens: 1006632960 | elapsed time per iteration (s): 0.29 | learning rate: 1.740E-04 | global batch size: 256 | lm loss: 4.272411E+00 | grad norm: 0.524 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.177 | TFLOPs: 30.39 | 7: iteration 1930/ 7508 | consumed samples: 494080 | consumed tokens: 1011875840 | elapsed time per iteration (s): 0.29 | learning rate: 1.737E-04 | global batch size: 256 | lm loss: 4.264643E+00 | grad norm: 0.502 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.493 | TFLOPs: 30.40 | 7: iteration 1940/ 7508 | consumed samples: 496640 | consumed tokens: 1017118720 | elapsed time per iteration (s): 0.30 | learning rate: 1.735E-04 | global batch size: 256 | lm loss: 4.264979E+00 | grad norm: 0.528 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.567 | TFLOPs: 30.37 | 7: iteration 1950/ 7508 | consumed samples: 499200 | consumed tokens: 1022361600 | elapsed time per iteration (s): 0.30 | learning rate: 1.732E-04 | global batch size: 256 | lm loss: 4.270275E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.590 | TFLOPs: 30.34 | 7: iteration 1960/ 7508 | consumed samples: 
501760 | consumed tokens: 1027604480 | elapsed time per iteration (s): 0.30 | learning rate: 1.729E-04 | global batch size: 256 | lm loss: 4.267098E+00 | grad norm: 0.575 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.445 | TFLOPs: 30.37 | 7: iteration 1970/ 7508 | consumed samples: 504320 | consumed tokens: 1032847360 | elapsed time per iteration (s): 0.30 | learning rate: 1.726E-04 | global batch size: 256 | lm loss: 4.257481E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.875 | TFLOPs: 30.31 | 7: iteration 1980/ 7508 | consumed samples: 506880 | consumed tokens: 1038090240 | elapsed time per iteration (s): 0.30 | learning rate: 1.724E-04 | global batch size: 256 | lm loss: 4.257095E+00 | grad norm: 0.546 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 851.020 | TFLOPs: 29.79 | 7: iteration 1990/ 7508 | consumed samples: 509440 | consumed tokens: 1043333120 | elapsed time per iteration (s): 0.30 | learning rate: 1.721E-04 | global batch size: 256 | lm loss: 4.262347E+00 | grad norm: 0.691 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 853.870 | TFLOPs: 29.89 |
0: [2023-03-16 23:01:42,005] [INFO] [logging.py:68:log_dist] [Rank 0] step=2000, skipped=0, lr=[0.00017182361507925355, 0.00017182361507925355, 0.00017182361507925355], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
7: iteration 2000/ 7508 | consumed samples: 512000 | consumed tokens: 1048576000 | elapsed time per iteration (s): 0.30 | learning rate: 1.718E-04 | global batch size: 256 | lm loss: 4.252872E+00 | grad norm: 0.566 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.344 | TFLOPs: 30.36 |
0: steps: 2000 loss: 4.2699 iter time (s): 0.305 samples/sec: 839.446
7: -----------------------------------------------------------------------------------------------
7: validation loss at iteration 2000 | lm loss value: 4.151191E+00 | lm loss PPL: 6.350958E+01 |
7: -----------------------------------------------------------------------------------------------
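
The iteration-2000 block above can be cross-checked directly against the run's configured values (seq-length 2048, global-batch-size 256, lr 2e-4 decaying to min-lr 2e-5 over 1,922,149 samples after 19,221 warmup samples): consumed tokens are consumed samples times the sequence length, throughput is the global batch divided by the iteration time, validation perplexity is exp() of the validation lm loss, and the logged learning rate is consistent with linear warmup followed by cosine decay. A minimal Python sketch of these checks (illustrative names; the schedule function is an assumption modelled on Megatron's usual annealing, not code taken from this run):

```python
import math

# Constants taken from this run's configuration; names are illustrative.
SEQ_LEN = 2048                 # --seq-length
GLOBAL_BATCH = 256             # --global-batch-size
MAX_LR, MIN_LR = 2e-4, 2e-5    # --lr, --min-lr
WARMUP_SAMPLES = 19_221        # --lr-warmup-samples
DECAY_SAMPLES = 1_922_149      # --lr-decay-samples

def cosine_lr(consumed_samples: int) -> float:
    """Linear warmup followed by cosine decay to MIN_LR (assumed schedule)."""
    if consumed_samples < WARMUP_SAMPLES:
        return MAX_LR * consumed_samples / WARMUP_SAMPLES
    ratio = (consumed_samples - WARMUP_SAMPLES) / (DECAY_SAMPLES - WARMUP_SAMPLES)
    return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1.0 + math.cos(math.pi * ratio))

print(512_000 * SEQ_LEN)      # 1048576000 -> "consumed tokens" at iteration 2000
print(GLOBAL_BATCH / 0.305)   # ~839.3 samples/s, vs. the logged 839.446
print(math.exp(4.151191))     # ~63.51, vs. the logged lm loss PPL 6.350958E+01
print(cosine_lr(512_000))     # ~1.7182e-04, vs. the lr in the DeepSpeed line above
```
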
7: iteration 2010/ 7508 | consumed samples: 514560 | consumed tokens: 1053818880 | elapsed time per iteration (s): 0.31 | learning rate: 1.715E-04 | global batch size: 256 | lm loss: 4.251309E+00 | grad norm: 0.498 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 837.690 | TFLOPs: 29.33 | 7: iteration 2020/ 7508 | consumed samples: 517120 | consumed tokens: 1059061760 | elapsed time per iteration (s): 0.29 | learning rate: 1.713E-04 | global batch size: 256 | lm loss: 4.251706E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.734 | TFLOPs: 30.41 | 7: iteration 2030/ 7508 | consumed samples: 519680 | consumed tokens: 1064304640 | elapsed time per iteration (s): 0.29 | learning rate: 1.710E-04 | global batch size: 256 | lm loss: 4.245303E+00 | grad norm: 0.458 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.468 | TFLOPs: 30.44 | 7: iteration 2040/ 7508 | consumed samples: 522240 | consumed tokens: 1069547520 | elapsed time per iteration (s): 0.29 | learning rate: 1.707E-04 | global batch size: 256 | lm loss: 4.241378E+00 | grad norm: 0.526 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.366 | TFLOPs: 30.43 | 7: iteration 2050/ 7508 | consumed samples: 524800 | consumed tokens: 1074790400 | elapsed time per iteration (s): 0.29 | learning rate: 1.704E-04 | global batch size: 256 | lm loss: 4.235048E+00 | grad norm: 0.503 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.216 | TFLOPs: 30.46 | 7: iteration 2060/ 7508 | consumed samples: 527360 | consumed tokens: 1080033280 | elapsed time per iteration (s): 0.29 | learning rate: 1.701E-04 | global batch size: 256 | lm loss: 4.235979E+00 | grad norm: 0.492 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.932 | TFLOPs: 30.45 | 7: iteration 2070/ 7508 | consumed samples: 529920 | consumed tokens: 1085276160 | elapsed time per iteration (s): 0.29 | learning rate: 1.699E-04 | global batch size: 256 | lm loss: 4.236476E+00 | grad norm: 0.532 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.389 | TFLOPs: 30.47 | 7: iteration 2080/ 7508 | consumed samples: 532480 | consumed tokens: 1090519040 | elapsed time per iteration (s): 0.30 | learning rate: 1.696E-04 | global batch size: 256 | lm loss: 4.229067E+00 | grad norm: 0.543 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.323 | TFLOPs: 30.05 | 7: iteration 2090/ 7508 | consumed samples: 535040 | consumed tokens: 1095761920 | elapsed time per iteration (s): 0.29 | learning rate: 1.693E-04 | global batch size: 256 | lm loss: 4.234029E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.098 | TFLOPs: 30.42 | 7: iteration 2100/ 7508 | consumed samples: 537600 | consumed tokens: 1101004800 | elapsed time per iteration (s): 0.30 | learning rate: 1.690E-04 | global batch size: 256 | lm loss: 4.228865E+00 | grad norm: 0.486 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.214 | TFLOPs: 30.32 | 7: iteration 2110/ 7508 | consumed samples: 540160 | consumed tokens: 1106247680 | elapsed time per iteration (s): 0.30 | learning rate: 1.687E-04 | global batch size: 256 | lm loss: 4.226245E+00 | grad norm: 0.531 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.157 | TFLOPs: 30.18 | 7: iteration 2120/ 7508 | consumed samples: 542720 | consumed tokens: 1111490560 | elapsed time per iteration (s): 0.29 | learning rate: 1.684E-04 | global batch size: 256 | lm loss: 4.221554E+00 | grad norm: 0.615 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.389 | TFLOPs: 30.43 | 7: iteration 2130/ 7508 | consumed samples: 545280 | consumed tokens: 1116733440 | elapsed time per iteration (s): 0.29 | learning rate: 1.681E-04 | global batch size: 256 | lm loss: 4.213920E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.979 | TFLOPs: 30.42 | 7: iteration 2140/ 7508 | consumed samples: 547840 | consumed tokens: 1121976320 | elapsed time per iteration (s): 0.29 | learning rate: 1.678E-04 | global batch size: 256 | lm loss: 4.211734E+00 | grad norm: 0.461 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.341 | TFLOPs: 30.43 | 7: iteration 2150/ 7508 | consumed
samples: 550400 | consumed tokens: 1127219200 | elapsed time per iteration (s): 0.29 | learning rate: 1.676E-04 | global batch size: 256 | lm loss: 4.217861E+00 | grad norm: 0.486 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.512 | TFLOPs: 30.44 | 7: iteration 2160/ 7508 | consumed samples: 552960 | consumed tokens: 1132462080 | elapsed time per iteration (s): 0.29 | learning rate: 1.673E-04 | global batch size: 256 | lm loss: 4.209661E+00 | grad norm: 0.545 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.843 | TFLOPs: 30.42 | 7: iteration 2170/ 7508 | consumed samples: 555520 | consumed tokens: 1137704960 | elapsed time per iteration (s): 0.29 | learning rate: 1.670E-04 | global batch size: 256 | lm loss: 4.212450E+00 | grad norm: 0.452 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.659 | TFLOPs: 30.44 | 7: iteration 2180/ 7508 | consumed samples: 558080 | consumed tokens: 1142947840 | elapsed time per iteration (s): 0.29 | learning rate: 1.667E-04 | global batch size: 256 | lm loss: 4.209634E+00 | grad norm: 0.527 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.607 | TFLOPs: 30.44 | 7: iteration 2190/ 7508 | consumed samples: 560640 | consumed tokens: 1148190720 | elapsed time per iteration (s): 0.29 | learning rate: 1.664E-04 | global batch size: 256 | lm loss: 4.207815E+00 | grad norm: 0.515 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.915 | TFLOPs: 30.45 | 7: iteration 2200/ 7508 | consumed samples: 563200 | consumed tokens: 1153433600 | elapsed time per iteration (s): 0.29 | learning rate: 1.661E-04 | global batch size: 256 | lm loss: 4.205397E+00 | grad norm: 0.438 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.596 | TFLOPs: 30.44 | 7: iteration 2210/ 7508 | consumed samples: 565760 | consumed tokens: 1158676480 | elapsed time per iteration (s): 0.29 | learning rate: 1.658E-04 | global batch size: 256 | lm loss: 4.197631E+00 | grad norm: 0.513 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.550 | TFLOPs: 30.44 | 7: iteration 2220/ 7508 | consumed samples: 568320 | consumed tokens: 1163919360 | elapsed time per iteration (s): 0.30 | learning rate: 1.655E-04 | global batch size: 256 | lm loss: 4.199112E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.830 | TFLOPs: 30.21 | 7: iteration 2230/ 7508 | consumed samples: 570880 | consumed tokens: 1169162240 | elapsed time per iteration (s): 0.29 | learning rate: 1.652E-04 | global batch size: 256 | lm loss: 4.202597E+00 | grad norm: 0.567 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.600 | TFLOPs: 30.44 | 7: iteration 2240/ 7508 | consumed samples: 573440 | consumed tokens: 1174405120 | elapsed time per iteration (s): 0.29 | learning rate: 1.649E-04 | global batch size: 256 | lm loss: 4.195391E+00 | grad norm: 0.446 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.667 | TFLOPs: 30.44 | 7: iteration 2250/ 7508 | consumed samples: 576000 | consumed tokens: 1179648000 | elapsed time per iteration (s): 0.29 | learning rate: 1.646E-04 | 
global batch size: 256 | lm loss: 4.193999E+00 | grad norm: 0.517 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.615 | TFLOPs: 30.44 | 7: iteration 2260/ 7508 | consumed samples: 578560 | consumed tokens: 1184890880 | elapsed time per iteration (s): 0.29 | learning rate: 1.643E-04 | global batch size: 256 | lm loss: 4.197237E+00 | grad norm: 0.576 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.366 | TFLOPs: 30.43 | 7: iteration 2270/ 7508 | consumed samples: 581120 | consumed tokens: 1190133760 | elapsed time per iteration (s): 0.29 | learning rate: 1.640E-04 | global batch size: 256 | lm loss: 4.188982E+00 | grad norm: 0.449 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.431 | TFLOPs: 30.44 | 7: iteration 2280/ 7508 | consumed samples: 583680 | consumed tokens: 1195376640 | elapsed time per iteration (s): 0.29 | learning rate: 1.637E-04 | global batch size: 256 | lm loss: 4.196530E+00 | grad norm: 0.528 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.898 | TFLOPs: 30.45 | 7: iteration 2290/ 7508 | consumed samples: 586240 | consumed tokens: 1200619520 | elapsed time per iteration (s): 0.30 | learning rate: 1.634E-04 | global batch size: 256 | lm loss: 4.179589E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.235 | TFLOPs: 29.97 | 7: iteration 2300/ 7508 | consumed samples: 588800 | consumed tokens: 1205862400 | elapsed time per iteration (s): 0.29 | learning rate: 1.631E-04 | global batch size: 256 | lm loss: 4.175238E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.904 | TFLOPs: 30.45 | 7: iteration 2310/ 7508 | consumed samples: 591360 | consumed tokens: 1211105280 | elapsed time per iteration (s): 0.29 | learning rate: 1.627E-04 | global batch size: 256 | lm loss: 4.179284E+00 | grad norm: 0.512 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.094 | TFLOPs: 30.42 | 7: iteration 2320/ 7508 | consumed samples: 593920 | consumed tokens: 1216348160 | elapsed time per iteration (s): 0.29 | learning rate: 1.624E-04 | global batch size: 256 | lm loss: 4.182033E+00 | grad norm: 0.471 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.062 | TFLOPs: 30.42 | 7: iteration 2330/ 7508 | consumed samples: 596480 | consumed tokens: 1221591040 | elapsed time per iteration (s): 0.29 | learning rate: 1.621E-04 | global batch size: 256 | lm loss: 4.174355E+00 | grad norm: 0.460 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.063 | TFLOPs: 30.42 | 7: iteration 2340/ 7508 | consumed samples: 599040 | consumed tokens: 1226833920 | elapsed time per iteration (s): 0.29 | learning rate: 1.618E-04 | global batch size: 256 | lm loss: 4.170736E+00 | grad norm: 0.473 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.398 | TFLOPs: 30.40 | 7: iteration 2350/ 7508 | consumed samples: 601600 | consumed tokens: 1232076800 | elapsed time per iteration (s): 0.29 | learning rate: 1.615E-04 | global batch size: 256 | lm loss: 4.166614E+00 | grad norm: 0.506 | num zeros: 0.0 | number of skipped iterations: 
0 | number of nan iterations: 0 | samples per second: 869.519 | TFLOPs: 30.44 | 7: iteration 2360/ 7508 | consumed samples: 604160 | consumed tokens: 1237319680 | elapsed time per iteration (s): 0.29 | learning rate: 1.612E-04 | global batch size: 256 | lm loss: 4.169395E+00 | grad norm: 0.496 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.508 | TFLOPs: 30.44 | 7: iteration 2370/ 7508 | consumed samples: 606720 | consumed tokens: 1242562560 | elapsed time per iteration (s): 0.29 | learning rate: 1.609E-04 | global batch size: 256 | lm loss: 4.164606E+00 | grad norm: 0.604 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.580 | TFLOPs: 30.44 | 7: iteration 2380/ 7508 | consumed samples: 609280 | consumed tokens: 1247805440 | elapsed time per iteration (s): 0.29 | learning rate: 1.606E-04 | global batch size: 256 | lm loss: 4.162442E+00 | grad norm: 0.459 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.140 | TFLOPs: 30.43 | 7: iteration 2390/ 7508 | consumed samples: 611840 | consumed tokens: 1253048320 | elapsed time per iteration (s): 0.30 | learning rate: 1.603E-04 | global batch size: 256 | lm loss: 4.164529E+00 | grad norm: 0.455 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.991 | TFLOPs: 30.11 | 7: iteration 2400/ 7508 | consumed samples: 614400 | consumed tokens: 1258291200 | elapsed time per iteration (s): 0.29 | learning rate: 1.599E-04 | global batch size: 256 | lm loss: 4.161348E+00 | grad norm: 0.549 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.389 | TFLOPs: 30.43 | 7: iteration 2410/ 7508 | consumed samples: 616960 | consumed tokens: 1263534080 | elapsed time per iteration (s): 0.29 | learning rate: 1.596E-04 | global batch size: 256 | lm loss: 4.159683E+00 | grad norm: 0.467 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.584 | TFLOPs: 30.44 | 7: iteration 2420/ 7508 | consumed samples: 619520 | consumed tokens: 1268776960 | elapsed time per iteration (s): 0.29 | learning rate: 1.593E-04 | global batch size: 256 | lm loss: 4.157463E+00 | grad norm: 0.443 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.268 | TFLOPs: 30.43 | 7: iteration 2430/ 7508 | consumed samples: 622080 | consumed tokens: 1274019840 | elapsed time per iteration (s): 0.29 | learning rate: 1.590E-04 | global batch size: 256 | lm loss: 4.157110E+00 | grad norm: 0.486 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.407 | TFLOPs: 30.44 | 7: iteration 2440/ 7508 | consumed samples: 624640 | consumed tokens: 1279262720 | elapsed time per iteration (s): 0.29 | learning rate: 1.587E-04 | global batch size: 256 | lm loss: 4.155990E+00 | grad norm: 0.587 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.393 | TFLOPs: 30.44 | 7: iteration 2450/ 7508 | consumed samples: 627200 | consumed tokens: 1284505600 | elapsed time per iteration (s): 0.29 | learning rate: 1.583E-04 | global batch size: 256 | lm loss: 4.148820E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.986 | TFLOPs: 30.46 | 7: iteration 2460/ 7508 | consumed 
samples: 629760 | consumed tokens: 1289748480 | elapsed time per iteration (s): 0.29 | learning rate: 1.580E-04 | global batch size: 256 | lm loss: 4.153844E+00 | grad norm: 0.498 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.927 | TFLOPs: 30.45 | 7: iteration 2470/ 7508 | consumed samples: 632320 | consumed tokens: 1294991360 | elapsed time per iteration (s): 0.29 | learning rate: 1.577E-04 | global batch size: 256 | lm loss: 4.145110E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.589 | TFLOPs: 30.44 | 7: iteration 2480/ 7508 | consumed samples: 634880 | consumed tokens: 1300234240 | elapsed time per iteration (s): 0.29 | learning rate: 1.574E-04 | global batch size: 256 | lm loss: 4.146311E+00 | grad norm: 0.482 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.727 | TFLOPs: 30.45 | 7: iteration 2490/ 7508 | consumed samples: 637440 | consumed tokens: 1305477120 | elapsed time per iteration (s): 0.29 | learning rate: 1.571E-04 | global batch size: 256 | lm loss: 4.135750E+00 | grad norm: 0.508 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.206 | TFLOPs: 30.46 | 7: iteration 2500/ 7508 | consumed samples: 640000 | consumed tokens: 1310720000 | elapsed time per iteration (s): 0.29 | learning rate: 1.567E-04 | global batch size: 256 | lm loss: 4.136681E+00 | grad norm: 0.562 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.806 | TFLOPs: 30.45 | 7: iteration 2510/ 7508 | consumed samples: 642560 | consumed tokens: 1315962880 | elapsed time per iteration (s): 0.30 | learning rate: 1.564E-04 | global batch size: 256 | lm loss: 4.144706E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 861.275 | TFLOPs: 30.15 | 7: iteration 2520/ 7508 | consumed samples: 645120 | consumed tokens: 1321205760 | elapsed time per iteration (s): 0.29 | learning rate: 1.561E-04 | global batch size: 256 | lm loss: 4.136156E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.313 | TFLOPs: 30.43 | 7: iteration 2530/ 7508 | consumed samples: 647680 | consumed tokens: 1326448640 | elapsed time per iteration (s): 0.29 | learning rate: 1.558E-04 | global batch size: 256 | lm loss: 4.132880E+00 | grad norm: 0.488 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.975 | TFLOPs: 30.46 | 7: iteration 2540/ 7508 | consumed samples: 650240 | consumed tokens: 1331691520 | elapsed time per iteration (s): 0.29 | learning rate: 1.554E-04 | global batch size: 256 | lm loss: 4.125578E+00 | grad norm: 0.475 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.638 | TFLOPs: 30.44 | 7: iteration 2550/ 7508 | consumed samples: 652800 | consumed tokens: 1336934400 | elapsed time per iteration (s): 0.29 | learning rate: 1.551E-04 | global batch size: 256 | lm loss: 4.128904E+00 | grad norm: 0.495 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.195 | TFLOPs: 30.46 | 7: iteration 2560/ 7508 | consumed samples: 655360 | consumed tokens: 1342177280 | elapsed time per iteration (s): 0.29 | learning rate: 1.548E-04 | 
global batch size: 256 | lm loss: 4.123451E+00 | grad norm: 0.443 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.695 | TFLOPs: 30.45 | 7: iteration 2570/ 7508 | consumed samples: 657920 | consumed tokens: 1347420160 | elapsed time per iteration (s): 0.29 | learning rate: 1.544E-04 | global batch size: 256 | lm loss: 4.130075E+00 | grad norm: 0.494 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.639 | TFLOPs: 30.44 | 7: iteration 2580/ 7508 | consumed samples: 660480 | consumed tokens: 1352663040 | elapsed time per iteration (s): 0.30 | learning rate: 1.541E-04 | global batch size: 256 | lm loss: 4.125323E+00 | grad norm: 0.513 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 848.840 | TFLOPs: 29.72 | 7: iteration 2590/ 7508 | consumed samples: 663040 | consumed tokens: 1357905920 | elapsed time per iteration (s): 0.29 | learning rate: 1.538E-04 | global batch size: 256 | lm loss: 4.125738E+00 | grad norm: 0.481 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.908 | TFLOPs: 30.45 | 7: iteration 2600/ 7508 | consumed samples: 665600 | consumed tokens: 1363148800 | elapsed time per iteration (s): 0.30 | learning rate: 1.534E-04 | global batch size: 256 | lm loss: 4.128444E+00 | grad norm: 0.545 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 854.243 | TFLOPs: 29.90 | 7: iteration 2610/ 7508 | consumed samples: 668160 | consumed tokens: 1368391680 | elapsed time per iteration (s): 0.29 | learning rate: 1.531E-04 | global batch size: 256 | lm loss: 4.115340E+00 | grad norm: 0.500 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.109 | TFLOPs: 30.43 | 7: iteration 2620/ 7508 | consumed samples: 670720 | consumed tokens: 1373634560 | elapsed time per iteration (s): 0.29 | learning rate: 1.528E-04 | global batch size: 256 | lm loss: 4.125539E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.353 | TFLOPs: 30.43 | 7: iteration 2630/ 7508 | consumed samples: 673280 | consumed tokens: 1378877440 | elapsed time per iteration (s): 0.29 | learning rate: 1.524E-04 | global batch size: 256 | lm loss: 4.107694E+00 | grad norm: 0.396 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.682 | TFLOPs: 30.45 | 7: iteration 2640/ 7508 | consumed samples: 675840 | consumed tokens: 1384120320 | elapsed time per iteration (s): 0.29 | learning rate: 1.521E-04 | global batch size: 256 | lm loss: 4.108725E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.481 | TFLOPs: 30.44 | 7: iteration 2650/ 7508 | consumed samples: 678400 | consumed tokens: 1389363200 | elapsed time per iteration (s): 0.29 | learning rate: 1.518E-04 | global batch size: 256 | lm loss: 4.113191E+00 | grad norm: 0.487 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.515 | TFLOPs: 30.44 | 7: iteration 2660/ 7508 | consumed samples: 680960 | consumed tokens: 1394606080 | elapsed time per iteration (s): 0.29 | learning rate: 1.514E-04 | global batch size: 256 | lm loss: 4.109768E+00 | grad norm: 0.528 | num zeros: 0.0 | number of skipped iterations: 
0 | number of nan iterations: 0 | samples per second: 868.782 | TFLOPs: 30.41 | 7: iteration 2670/ 7508 | consumed samples: 683520 | consumed tokens: 1399848960 | elapsed time per iteration (s): 0.29 | learning rate: 1.511E-04 | global batch size: 256 | lm loss: 4.107101E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.023 | TFLOPs: 30.42 | 7: iteration 2680/ 7508 | consumed samples: 686080 | consumed tokens: 1405091840 | elapsed time per iteration (s): 0.29 | learning rate: 1.507E-04 | global batch size: 256 | lm loss: 4.101108E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.064 | TFLOPs: 30.42 | 7: iteration 2690/ 7508 | consumed samples: 688640 | consumed tokens: 1410334720 | elapsed time per iteration (s): 0.29 | learning rate: 1.504E-04 | global batch size: 256 | lm loss: 4.107174E+00 | grad norm: 0.472 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.515 | TFLOPs: 30.44 | 7: iteration 2700/ 7508 | consumed samples: 691200 | consumed tokens: 1415577600 | elapsed time per iteration (s): 0.29 | learning rate: 1.501E-04 | global batch size: 256 | lm loss: 4.096558E+00 | grad norm: 0.456 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.411 | TFLOPs: 30.44 | 7: iteration 2710/ 7508 | consumed samples: 693760 | consumed tokens: 1420820480 | elapsed time per iteration (s): 0.29 | learning rate: 1.497E-04 | global batch size: 256 | lm loss: 4.094380E+00 | grad norm: 0.474 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.445 | TFLOPs: 30.44 | 7: iteration 2720/ 7508 | consumed samples: 696320 | consumed tokens: 1426063360 | elapsed time per iteration (s): 0.29 | learning rate: 1.494E-04 | global batch size: 256 | lm loss: 4.097377E+00 | grad norm: 0.468 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.513 | TFLOPs: 30.44 | 7: iteration 2730/ 7508 | consumed samples: 698880 | consumed tokens: 1431306240 | elapsed time per iteration (s): 0.29 | learning rate: 1.490E-04 | global batch size: 256 | lm loss: 4.106512E+00 | grad norm: 0.446 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.496 | TFLOPs: 30.44 | 7: iteration 2740/ 7508 | consumed samples: 701440 | consumed tokens: 1436549120 | elapsed time per iteration (s): 0.30 | learning rate: 1.487E-04 | global batch size: 256 | lm loss: 4.096748E+00 | grad norm: 0.466 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 863.400 | TFLOPs: 30.23 | 7: iteration 2750/ 7508 | consumed samples: 704000 | consumed tokens: 1441792000 | elapsed time per iteration (s): 0.29 | learning rate: 1.484E-04 | global batch size: 256 | lm loss: 4.095808E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.355 | TFLOPs: 30.47 | 7: iteration 2760/ 7508 | consumed samples: 706560 | consumed tokens: 1447034880 | elapsed time per iteration (s): 0.29 | learning rate: 1.480E-04 | global batch size: 256 | lm loss: 4.095375E+00 | grad norm: 0.519 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.030 | TFLOPs: 30.46 | 7: iteration 2770/ 7508 | consumed 
samples: 709120 | consumed tokens: 1452277760 | elapsed time per iteration (s): 0.29 | learning rate: 1.477E-04 | global batch size: 256 | lm loss: 4.087524E+00 | grad norm: 0.480 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.929 | TFLOPs: 30.45 | 7: iteration 2780/ 7508 | consumed samples: 711680 | consumed tokens: 1457520640 | elapsed time per iteration (s): 0.29 | learning rate: 1.473E-04 | global batch size: 256 | lm loss: 4.092260E+00 | grad norm: 0.521 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.196 | TFLOPs: 30.46 | 7: iteration 2790/ 7508 | consumed samples: 714240 | consumed tokens: 1462763520 | elapsed time per iteration (s): 0.29 | learning rate: 1.470E-04 | global batch size: 256 | lm loss: 4.087292E+00 | grad norm: 0.458 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.461 | TFLOPs: 30.44 | 7: iteration 2800/ 7508 | consumed samples: 716800 | consumed tokens: 1468006400 | elapsed time per iteration (s): 0.30 | learning rate: 1.466E-04 | global batch size: 256 | lm loss: 4.077869E+00 | grad norm: 0.489 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.847 | TFLOPs: 30.07 | 7: iteration 2810/ 7508 | consumed samples: 719360 | consumed tokens: 1473249280 | elapsed time per iteration (s): 0.29 | learning rate: 1.463E-04 | global batch size: 256 | lm loss: 4.087189E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.711 | TFLOPs: 30.45 | 7: iteration 2820/ 7508 | consumed samples: 721920 | consumed tokens: 1478492160 | elapsed time per iteration (s): 0.29 | learning rate: 1.459E-04 | global batch size: 256 | lm loss: 4.083091E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.883 | TFLOPs: 30.45 | 7: iteration 2830/ 7508 | consumed samples: 724480 | consumed tokens: 1483735040 | elapsed time per iteration (s): 0.29 | learning rate: 1.456E-04 | global batch size: 256 | lm loss: 4.077601E+00 | grad norm: 0.490 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.523 | TFLOPs: 30.44 | 7: iteration 2840/ 7508 | consumed samples: 727040 | consumed tokens: 1488977920 | elapsed time per iteration (s): 0.29 | learning rate: 1.452E-04 | global batch size: 256 | lm loss: 4.079765E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.688 | TFLOPs: 30.45 | 7: iteration 2850/ 7508 | consumed samples: 729600 | consumed tokens: 1494220800 | elapsed time per iteration (s): 0.29 | learning rate: 1.449E-04 | global batch size: 256 | lm loss: 4.082687E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.209 | TFLOPs: 30.43 | 7: iteration 2860/ 7508 | consumed samples: 732160 | consumed tokens: 1499463680 | elapsed time per iteration (s): 0.29 | learning rate: 1.445E-04 | global batch size: 256 | lm loss: 4.075609E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.553 | TFLOPs: 30.44 | 7: iteration 2870/ 7508 | consumed samples: 734720 | consumed tokens: 1504706560 | elapsed time per iteration (s): 0.29 | learning rate: 1.442E-04 | 
global batch size: 256 | lm loss: 4.077066E+00 | grad norm: 0.466 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.680 | TFLOPs: 30.45 | 7: iteration 2880/ 7508 | consumed samples: 737280 | consumed tokens: 1509949440 | elapsed time per iteration (s): 0.29 | learning rate: 1.438E-04 | global batch size: 256 | lm loss: 4.069281E+00 | grad norm: 0.581 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.427 | TFLOPs: 30.44 | 7: iteration 2890/ 7508 | consumed samples: 739840 | consumed tokens: 1515192320 | elapsed time per iteration (s): 0.29 | learning rate: 1.435E-04 | global batch size: 256 | lm loss: 4.072545E+00 | grad norm: 0.466 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.723 | TFLOPs: 30.45 | 7: iteration 2900/ 7508 | consumed samples: 742400 | consumed tokens: 1520435200 | elapsed time per iteration (s): 0.29 | learning rate: 1.431E-04 | global batch size: 256 | lm loss: 4.072614E+00 | grad norm: 0.556 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.920 | TFLOPs: 30.45 | 7: iteration 2910/ 7508 | consumed samples: 744960 | consumed tokens: 1525678080 | elapsed time per iteration (s): 0.29 | learning rate: 1.428E-04 | global batch size: 256 | lm loss: 4.069069E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.171 | TFLOPs: 30.46 | 7: iteration 2920/ 7508 | consumed samples: 747520 | consumed tokens: 1530920960 | elapsed time per iteration (s): 0.29 | learning rate: 1.424E-04 | global batch size: 256 | lm loss: 4.061108E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.553 | TFLOPs: 30.48 | 7: iteration 2930/ 7508 | consumed samples: 750080 | consumed tokens: 1536163840 | elapsed time per iteration (s): 0.29 | learning rate: 1.421E-04 | global batch size: 256 | lm loss: 4.056036E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.896 | TFLOPs: 30.45 | 7: iteration 2940/ 7508 | consumed samples: 752640 | consumed tokens: 1541406720 | elapsed time per iteration (s): 0.29 | learning rate: 1.417E-04 | global batch size: 256 | lm loss: 4.070029E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.906 | TFLOPs: 30.45 | 7: iteration 2950/ 7508 | consumed samples: 755200 | consumed tokens: 1546649600 | elapsed time per iteration (s): 0.29 | learning rate: 1.413E-04 | global batch size: 256 | lm loss: 4.062902E+00 | grad norm: 0.436 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.647 | TFLOPs: 30.44 | 7: iteration 2960/ 7508 | consumed samples: 757760 | consumed tokens: 1551892480 | elapsed time per iteration (s): 0.29 | learning rate: 1.410E-04 | global batch size: 256 | lm loss: 4.055090E+00 | grad norm: 0.427 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.000 | TFLOPs: 30.46 | 7: iteration 2970/ 7508 | consumed samples: 760320 | consumed tokens: 1557135360 | elapsed time per iteration (s): 0.29 | learning rate: 1.406E-04 | global batch size: 256 | lm loss: 4.057692E+00 | grad norm: 0.472 | num zeros: 0.0 | number of skipped iterations: 
0 | number of nan iterations: 0 | samples per second: 869.195 | TFLOPs: 30.43 | 7: iteration 2980/ 7508 | consumed samples: 762880 | consumed tokens: 1562378240 | elapsed time per iteration (s): 0.29 | learning rate: 1.403E-04 | global batch size: 256 | lm loss: 4.064543E+00 | grad norm: 0.496 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.179 | TFLOPs: 30.43 | 7: iteration 2990/ 7508 | consumed samples: 765440 | consumed tokens: 1567621120 | elapsed time per iteration (s): 0.29 | learning rate: 1.399E-04 | global batch size: 256 | lm loss: 4.056173E+00 | grad norm: 0.492 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.356 | TFLOPs: 30.43 | 7: iteration 3000/ 7508 | consumed samples: 768000 | consumed tokens: 1572864000 | elapsed time per iteration (s): 0.29 | learning rate: 1.396E-04 | global batch size: 256 | lm loss: 4.052768E+00 | grad norm: 0.474 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.393 | TFLOPs: 30.47 | 7: ----------------------------------------------------------------------------------------------- 7: validation loss at iteration 3000 | lm loss value: 4.005388E+00 | lm loss PPL: 5.489313E+01 | 7: ----------------------------------------------------------------------------------------------- 7: iteration 3010/ 7508 | consumed samples: 770560 | consumed tokens: 1578106880 | elapsed time per iteration (s): 0.30 | learning rate: 1.392E-04 | global batch size: 256 | lm loss: 4.049065E+00 | grad norm: 0.443 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 844.509 | TFLOPs: 29.56 | 7: iteration 3020/ 7508 | consumed samples: 773120 | consumed tokens: 1583349760 | elapsed time per iteration (s): 0.29 | learning rate: 1.388E-04 | global batch size: 256 | lm loss: 4.048225E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.082 | TFLOPs: 30.46 | 7: iteration 3030/ 7508 | consumed samples: 775680 | consumed tokens: 1588592640 | elapsed time per iteration (s): 0.29 | learning rate: 1.385E-04 | global batch size: 256 | lm loss: 4.052283E+00 | grad norm: 0.460 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.330 | TFLOPs: 30.43 | 7: iteration 3040/ 7508 | consumed samples: 778240 | consumed tokens: 1593835520 | elapsed time per iteration (s): 0.29 | learning rate: 1.381E-04 | global batch size: 256 | lm loss: 4.046050E+00 | grad norm: 0.473 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.243 | TFLOPs: 30.43 | 7: iteration 3050/ 7508 | consumed samples: 780800 | consumed tokens: 1599078400 | elapsed time per iteration (s): 0.29 | learning rate: 1.378E-04 | global batch size: 256 | lm loss: 4.051793E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.657 | TFLOPs: 30.44 | 7: iteration 3060/ 7508 | consumed samples: 783360 | consumed tokens: 1604321280 | elapsed time per iteration (s): 0.29 | learning rate: 1.374E-04 | global batch size: 256 | lm loss: 4.039203E+00 | grad norm: 0.489 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.360 | TFLOPs: 30.43 | 7: iteration 3070/ 7508 | consumed samples: 785920 | consumed tokens: 1609564160 | 
elapsed time per iteration (s): 0.29 | learning rate: 1.370E-04 | global batch size: 256 | lm loss: 4.049268E+00 | grad norm: 0.481 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.749 | TFLOPs: 30.45 | 7: iteration 3080/ 7508 | consumed samples: 788480 | consumed tokens: 1614807040 | elapsed time per iteration (s): 0.29 | learning rate: 1.367E-04 | global batch size: 256 | lm loss: 4.042397E+00 | grad norm: 0.454 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.279 | TFLOPs: 30.43 | 7: iteration 3090/ 7508 | consumed samples: 791040 | consumed tokens: 1620049920 | elapsed time per iteration (s): 0.30 | learning rate: 1.363E-04 | global batch size: 256 | lm loss: 4.040377E+00 | grad norm: 0.456 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.839 | TFLOPs: 30.10 | 7: iteration 3100/ 7508 | consumed samples: 793600 | consumed tokens: 1625292800 | elapsed time per iteration (s): 0.30 | learning rate: 1.359E-04 | global batch size: 256 | lm loss: 4.040835E+00 | grad norm: 0.451 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 841.915 | TFLOPs: 29.47 | 7: iteration 3110/ 7508 | consumed samples: 796160 | consumed tokens: 1630535680 | elapsed time per iteration (s): 0.29 | learning rate: 1.356E-04 | global batch size: 256 | lm loss: 4.039669E+00 | grad norm: 0.485 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.653 | TFLOPs: 30.44 | 7: iteration 3120/ 7508 | consumed samples: 798720 | consumed tokens: 1635778560 | elapsed time per iteration (s): 0.29 | learning rate: 1.352E-04 | global batch size: 256 | lm loss: 4.033778E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.120 | TFLOPs: 30.43 | 7: iteration 3130/ 7508 | consumed samples: 801280 | consumed tokens: 1641021440 | elapsed time per iteration (s): 0.29 | learning rate: 1.348E-04 | global batch size: 256 | lm loss: 4.036861E+00 | grad norm: 0.449 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.740 | TFLOPs: 30.41 | 7: iteration 3140/ 7508 | consumed samples: 803840 | consumed tokens: 1646264320 | elapsed time per iteration (s): 0.29 | learning rate: 1.345E-04 | global batch size: 256 | lm loss: 4.037279E+00 | grad norm: 0.477 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.031 | TFLOPs: 30.42 | 7: iteration 3150/ 7508 | consumed samples: 806400 | consumed tokens: 1651507200 | elapsed time per iteration (s): 0.29 | learning rate: 1.341E-04 | global batch size: 256 | lm loss: 4.032407E+00 | grad norm: 0.457 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.483 | TFLOPs: 30.40 | 7: iteration 3160/ 7508 | consumed samples: 808960 | consumed tokens: 1656750080 | elapsed time per iteration (s): 0.30 | learning rate: 1.337E-04 | global batch size: 256 | lm loss: 4.030901E+00 | grad norm: 0.466 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.059 | TFLOPs: 30.28 | 7: iteration 3170/ 7508 | consumed samples: 811520 | consumed tokens: 1661992960 | elapsed time per iteration (s): 0.31 | learning rate: 1.334E-04 | global batch size: 256 | lm loss: 4.032357E+00 | 
grad norm: 0.457 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 815.517 | TFLOPs: 28.55 | 7: iteration 3180/ 7508 | consumed samples: 814080 | consumed tokens: 1667235840 | elapsed time per iteration (s): 0.30 | learning rate: 1.330E-04 | global batch size: 256 | lm loss: 4.024559E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 857.154 | TFLOPs: 30.01 | 7: iteration 3190/ 7508 | consumed samples: 816640 | consumed tokens: 1672478720 | elapsed time per iteration (s): 0.29 | learning rate: 1.326E-04 | global batch size: 256 | lm loss: 4.030912E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.581 | TFLOPs: 30.41 | 7: iteration 3200/ 7508 | consumed samples: 819200 | consumed tokens: 1677721600 | elapsed time per iteration (s): 0.29 | learning rate: 1.323E-04 | global batch size: 256 | lm loss: 4.031844E+00 | grad norm: 0.467 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.251 | TFLOPs: 30.40 | 7: iteration 3210/ 7508 | consumed samples: 821760 | consumed tokens: 1682964480 | elapsed time per iteration (s): 0.29 | learning rate: 1.319E-04 | global batch size: 256 | lm loss: 4.023493E+00 | grad norm: 0.471 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.420 | TFLOPs: 30.40 | 7: iteration 3220/ 7508 | consumed samples: 824320 | consumed tokens: 1688207360 | elapsed time per iteration (s): 0.29 | learning rate: 1.315E-04 | global batch size: 256 | lm loss: 4.025089E+00 | grad norm: 0.463 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.340 | TFLOPs: 30.40 | 7: iteration 3230/ 7508 | consumed samples: 826880 | consumed tokens: 1693450240 | elapsed time per iteration (s): 0.29 | learning rate: 1.312E-04 | global batch size: 256 | lm loss: 4.022142E+00 | grad norm: 0.468 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.316 | TFLOPs: 30.40 | 7: iteration 3240/ 7508 | consumed samples: 829440 | consumed tokens: 1698693120 | elapsed time per iteration (s): 0.29 | learning rate: 1.308E-04 | global batch size: 256 | lm loss: 4.024791E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.970 | TFLOPs: 30.39 | 7: iteration 3250/ 7508 | consumed samples: 832000 | consumed tokens: 1703936000 | elapsed time per iteration (s): 0.29 | learning rate: 1.304E-04 | global batch size: 256 | lm loss: 4.023072E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.603 | TFLOPs: 30.41 | 7: iteration 3260/ 7508 | consumed samples: 834560 | consumed tokens: 1709178880 | elapsed time per iteration (s): 0.29 | learning rate: 1.301E-04 | global batch size: 256 | lm loss: 4.009949E+00 | grad norm: 0.438 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.383 | TFLOPs: 30.40 | 7: iteration 3270/ 7508 | consumed samples: 837120 | consumed tokens: 1714421760 | elapsed time per iteration (s): 0.30 | learning rate: 1.297E-04 | global batch size: 256 | lm loss: 4.019357E+00 | grad norm: 0.465 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per 
second: 860.224 | TFLOPs: 30.11 | 7: iteration 3280/ 7508 | consumed samples: 839680 | consumed tokens: 1719664640 | elapsed time per iteration (s): 0.29 | learning rate: 1.293E-04 | global batch size: 256 | lm loss: 4.007081E+00 | grad norm: 0.461 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.614 | TFLOPs: 30.41 | 7: iteration 3290/ 7508 | consumed samples: 842240 | consumed tokens: 1724907520 | elapsed time per iteration (s): 0.29 | learning rate: 1.289E-04 | global batch size: 256 | lm loss: 4.012191E+00 | grad norm: 0.478 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.632 | TFLOPs: 30.41 | 7: iteration 3300/ 7508 | consumed samples: 844800 | consumed tokens: 1730150400 | elapsed time per iteration (s): 0.29 | learning rate: 1.286E-04 | global batch size: 256 | lm loss: 4.014669E+00 | grad norm: 0.424 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.755 | TFLOPs: 30.41 | 7: iteration 3310/ 7508 | consumed samples: 847360 | consumed tokens: 1735393280 | elapsed time per iteration (s): 0.29 | learning rate: 1.282E-04 | global batch size: 256 | lm loss: 4.011848E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.023 | TFLOPs: 30.42 | 7: iteration 3320/ 7508 | consumed samples: 849920 | consumed tokens: 1740636160 | elapsed time per iteration (s): 0.29 | learning rate: 1.278E-04 | global batch size: 256 | lm loss: 4.016106E+00 | grad norm: 0.456 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.563 | TFLOPs: 30.41 | 7: iteration 3330/ 7508 | consumed samples: 852480 | consumed tokens: 1745879040 | elapsed time per iteration (s): 0.29 | learning rate: 1.275E-04 | global batch size: 256 | lm loss: 4.007333E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.474 | TFLOPs: 30.40 | 7: iteration 3340/ 7508 | consumed samples: 855040 | consumed tokens: 1751121920 | elapsed time per iteration (s): 0.29 | learning rate: 1.271E-04 | global batch size: 256 | lm loss: 4.007884E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.673 | TFLOPs: 30.41 | 7: iteration 3350/ 7508 | consumed samples: 857600 | consumed tokens: 1756364800 | elapsed time per iteration (s): 0.30 | learning rate: 1.267E-04 | global batch size: 256 | lm loss: 4.004427E+00 | grad norm: 0.461 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.661 | TFLOPs: 30.37 | 7: iteration 3360/ 7508 | consumed samples: 860160 | consumed tokens: 1761607680 | elapsed time per iteration (s): 0.30 | learning rate: 1.263E-04 | global batch size: 256 | lm loss: 4.004444E+00 | grad norm: 0.455 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 843.204 | TFLOPs: 29.52 | 7: iteration 3370/ 7508 | consumed samples: 862720 | consumed tokens: 1766850560 | elapsed time per iteration (s): 0.30 | learning rate: 1.260E-04 | global batch size: 256 | lm loss: 4.003691E+00 | grad norm: 0.477 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.029 | TFLOPs: 30.04 | 7: iteration 3380/ 7508 | consumed samples: 865280 | consumed tokens: 1772093440 
| elapsed time per iteration (s): 0.29 | learning rate: 1.256E-04 | global batch size: 256 | lm loss: 4.001436E+00 | grad norm: 0.554 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.772 | TFLOPs: 30.41 | 7: iteration 3390/ 7508 | consumed samples: 867840 | consumed tokens: 1777336320 | elapsed time per iteration (s): 0.29 | learning rate: 1.252E-04 | global batch size: 256 | lm loss: 4.003262E+00 | grad norm: 0.485 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.045 | TFLOPs: 30.42 | 7: iteration 3400/ 7508 | consumed samples: 870400 | consumed tokens: 1782579200 | elapsed time per iteration (s): 0.29 | learning rate: 1.248E-04 | global batch size: 256 | lm loss: 4.003825E+00 | grad norm: 0.423 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.161 | TFLOPs: 30.43 | 7: iteration 3410/ 7508 | consumed samples: 872960 | consumed tokens: 1787822080 | elapsed time per iteration (s): 0.29 | learning rate: 1.245E-04 | global batch size: 256 | lm loss: 3.994157E+00 | grad norm: 0.468 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.179 | TFLOPs: 30.43 | 7: iteration 3420/ 7508 | consumed samples: 875520 | consumed tokens: 1793064960 | elapsed time per iteration (s): 0.29 | learning rate: 1.241E-04 | global batch size: 256 | lm loss: 3.992382E+00 | grad norm: 0.477 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.742 | TFLOPs: 30.41 | 7: iteration 3430/ 7508 | consumed samples: 878080 | consumed tokens: 1798307840 | elapsed time per iteration (s): 0.29 | learning rate: 1.237E-04 | global batch size: 256 | lm loss: 3.995279E+00 | grad norm: 0.465 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.366 | TFLOPs: 30.40 | 7: iteration 3440/ 7508 | consumed samples: 880640 | consumed tokens: 1803550720 | elapsed time per iteration (s): 0.29 | learning rate: 1.233E-04 | global batch size: 256 | lm loss: 3.998169E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.087 | TFLOPs: 30.42 | 7: iteration 3450/ 7508 | consumed samples: 883200 | consumed tokens: 1808793600 | elapsed time per iteration (s): 0.29 | learning rate: 1.230E-04 | global batch size: 256 | lm loss: 3.990416E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.290 | TFLOPs: 30.40 | 7: iteration 3460/ 7508 | consumed samples: 885760 | consumed tokens: 1814036480 | elapsed time per iteration (s): 0.29 | learning rate: 1.226E-04 | global batch size: 256 | lm loss: 3.990469E+00 | grad norm: 0.478 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.549 | TFLOPs: 30.41 | 7: iteration 3470/ 7508 | consumed samples: 888320 | consumed tokens: 1819279360 | elapsed time per iteration (s): 0.29 | learning rate: 1.222E-04 | global batch size: 256 | lm loss: 3.991534E+00 | grad norm: 0.438 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.921 | TFLOPs: 30.42 | 7: iteration 3480/ 7508 | consumed samples: 890880 | consumed tokens: 1824522240 | elapsed time per iteration (s): 0.29 | learning rate: 1.218E-04 | global batch size: 256 | lm loss: 3.985250E+00 
| grad norm: 0.441 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.930 | TFLOPs: 30.42 | 7: iteration 3490/ 7508 | consumed samples: 893440 | consumed tokens: 1829765120 | elapsed time per iteration (s): 0.29 | learning rate: 1.214E-04 | global batch size: 256 | lm loss: 3.985215E+00 | grad norm: 0.468 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.837 | TFLOPs: 30.42 | 7: iteration 3500/ 7508 | consumed samples: 896000 | consumed tokens: 1835008000 | elapsed time per iteration (s): 0.29 | learning rate: 1.211E-04 | global batch size: 256 | lm loss: 3.993533E+00 | grad norm: 0.521 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.841 | TFLOPs: 30.42 | 7: iteration 3510/ 7508 | consumed samples: 898560 | consumed tokens: 1840250880 | elapsed time per iteration (s): 0.29 | learning rate: 1.207E-04 | global batch size: 256 | lm loss: 3.981777E+00 | grad norm: 0.471 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.115 | TFLOPs: 30.43 | 7: iteration 3520/ 7508 | consumed samples: 901120 | consumed tokens: 1845493760 | elapsed time per iteration (s): 0.29 | learning rate: 1.203E-04 | global batch size: 256 | lm loss: 3.987037E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.140 | TFLOPs: 30.43 | 7: iteration 3530/ 7508 | consumed samples: 903680 | consumed tokens: 1850736640 | elapsed time per iteration (s): 0.31 | learning rate: 1.199E-04 | global batch size: 256 | lm loss: 3.986604E+00 | grad norm: 0.442 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 823.136 | TFLOPs: 28.82 | 7: iteration 3540/ 7508 | consumed samples: 906240 | consumed tokens: 1855979520 | elapsed time per iteration (s): 0.30 | learning rate: 1.196E-04 | global batch size: 256 | lm loss: 3.986409E+00 | grad norm: 0.483 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 847.479 | TFLOPs: 29.67 | 7: iteration 3550/ 7508 | consumed samples: 908800 | consumed tokens: 1861222400 | elapsed time per iteration (s): 0.30 | learning rate: 1.192E-04 | global batch size: 256 | lm loss: 3.979075E+00 | grad norm: 0.500 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.662 | TFLOPs: 30.06 | 7: iteration 3560/ 7508 | consumed samples: 911360 | consumed tokens: 1866465280 | elapsed time per iteration (s): 0.29 | learning rate: 1.188E-04 | global batch size: 256 | lm loss: 3.980143E+00 | grad norm: 0.490 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.940 | TFLOPs: 30.38 | 7: iteration 3570/ 7508 | consumed samples: 913920 | consumed tokens: 1871708160 | elapsed time per iteration (s): 0.30 | learning rate: 1.184E-04 | global batch size: 256 | lm loss: 3.988478E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.472 | TFLOPs: 30.37 | 7: iteration 3580/ 7508 | consumed samples: 916480 | consumed tokens: 1876951040 | elapsed time per iteration (s): 0.29 | learning rate: 1.180E-04 | global batch size: 256 | lm loss: 3.980798E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per 
second: 868.317 | TFLOPs: 30.40 | 7: iteration 3590/ 7508 | consumed samples: 919040 | consumed tokens: 1882193920 | elapsed time per iteration (s): 0.30 | learning rate: 1.177E-04 | global batch size: 256 | lm loss: 3.978498E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.441 | TFLOPs: 30.37 | 7: iteration 3600/ 7508 | consumed samples: 921600 | consumed tokens: 1887436800 | elapsed time per iteration (s): 0.29 | learning rate: 1.173E-04 | global batch size: 256 | lm loss: 3.981897E+00 | grad norm: 0.442 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.987 | TFLOPs: 30.39 | 7: iteration 3610/ 7508 | consumed samples: 924160 | consumed tokens: 1892679680 | elapsed time per iteration (s): 0.30 | learning rate: 1.169E-04 | global batch size: 256 | lm loss: 3.980571E+00 | grad norm: 0.450 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.604 | TFLOPs: 30.37 | 7: iteration 3620/ 7508 | consumed samples: 926720 | consumed tokens: 1897922560 | elapsed time per iteration (s): 0.30 | learning rate: 1.165E-04 | global batch size: 256 | lm loss: 3.976195E+00 | grad norm: 0.479 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.324 | TFLOPs: 30.36 | 7: iteration 3630/ 7508 | consumed samples: 929280 | consumed tokens: 1903165440 | elapsed time per iteration (s): 0.30 | learning rate: 1.161E-04 | global batch size: 256 | lm loss: 3.970193E+00 | grad norm: 0.477 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.585 | TFLOPs: 30.37 | 7: iteration 3640/ 7508 | consumed samples: 931840 | consumed tokens: 1908408320 | elapsed time per iteration (s): 0.29 | learning rate: 1.158E-04 | global batch size: 256 | lm loss: 3.972495E+00 | grad norm: 0.490 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.094 | TFLOPs: 30.39 | 7: iteration 3650/ 7508 | consumed samples: 934400 | consumed tokens: 1913651200 | elapsed time per iteration (s): 0.29 | learning rate: 1.154E-04 | global batch size: 256 | lm loss: 3.971235E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.001 | TFLOPs: 30.39 | 7: iteration 3660/ 7508 | consumed samples: 936960 | consumed tokens: 1918894080 | elapsed time per iteration (s): 0.29 | learning rate: 1.150E-04 | global batch size: 256 | lm loss: 3.972910E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.998 | TFLOPs: 30.39 | 7: iteration 3670/ 7508 | consumed samples: 939520 | consumed tokens: 1924136960 | elapsed time per iteration (s): 0.29 | learning rate: 1.146E-04 | global batch size: 256 | lm loss: 3.968786E+00 | grad norm: 0.480 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.040 | TFLOPs: 30.39 | 7: iteration 3680/ 7508 | consumed samples: 942080 | consumed tokens: 1929379840 | elapsed time per iteration (s): 0.29 | learning rate: 1.142E-04 | global batch size: 256 | lm loss: 3.961386E+00 | grad norm: 0.446 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.105 | TFLOPs: 30.39 | 7: iteration 3690/ 7508 | consumed samples: 944640 | consumed tokens: 1934622720 
| elapsed time per iteration (s): 0.29 | learning rate: 1.139E-04 | global batch size: 256 | lm loss: 3.971399E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.942 | TFLOPs: 30.38 | 7: iteration 3700/ 7508 | consumed samples: 947200 | consumed tokens: 1939865600 | elapsed time per iteration (s): 0.29 | learning rate: 1.135E-04 | global batch size: 256 | lm loss: 3.972411E+00 | grad norm: 0.456 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.210 | TFLOPs: 30.39 | 7: iteration 3710/ 7508 | consumed samples: 949760 | consumed tokens: 1945108480 | elapsed time per iteration (s): 0.29 | learning rate: 1.131E-04 | global batch size: 256 | lm loss: 3.961447E+00 | grad norm: 0.429 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.351 | TFLOPs: 30.40 | 7: iteration 3720/ 7508 | consumed samples: 952320 | consumed tokens: 1950351360 | elapsed time per iteration (s): 0.30 | learning rate: 1.127E-04 | global batch size: 256 | lm loss: 3.965704E+00 | grad norm: 0.518 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.435 | TFLOPs: 30.37 | 7: iteration 3730/ 7508 | consumed samples: 954880 | consumed tokens: 1955594240 | elapsed time per iteration (s): 0.30 | learning rate: 1.123E-04 | global batch size: 256 | lm loss: 3.956594E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.767 | TFLOPs: 30.38 | 7: iteration 3740/ 7508 | consumed samples: 957440 | consumed tokens: 1960837120 | elapsed time per iteration (s): 0.29 | learning rate: 1.120E-04 | global batch size: 256 | lm loss: 3.957201E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.995 | TFLOPs: 30.39 | 7: iteration 3750/ 7508 | consumed samples: 960000 | consumed tokens: 1966080000 | elapsed time per iteration (s): 0.29 | learning rate: 1.116E-04 | global batch size: 256 | lm loss: 3.968544E+00 | grad norm: 0.434 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.357 | TFLOPs: 30.40 | 7: iteration 3760/ 7508 | consumed samples: 962560 | consumed tokens: 1971322880 | elapsed time per iteration (s): 0.30 | learning rate: 1.112E-04 | global batch size: 256 | lm loss: 3.957128E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.490 | TFLOPs: 30.37 | 7: iteration 3770/ 7508 | consumed samples: 965120 | consumed tokens: 1976565760 | elapsed time per iteration (s): 0.29 | learning rate: 1.108E-04 | global batch size: 256 | lm loss: 3.957687E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.663 | TFLOPs: 30.41 | 7: iteration 3780/ 7508 | consumed samples: 967680 | consumed tokens: 1981808640 | elapsed time per iteration (s): 0.29 | learning rate: 1.104E-04 | global batch size: 256 | lm loss: 3.950490E+00 | grad norm: 0.445 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.579 | TFLOPs: 30.41 | 7: iteration 3790/ 7508 | consumed samples: 970240 | consumed tokens: 1987051520 | elapsed time per iteration (s): 0.29 | learning rate: 1.101E-04 | global batch size: 256 | lm loss: 3.959034E+00 
| grad norm: 0.423 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.435 | TFLOPs: 30.40 | 7: iteration 3800/ 7508 | consumed samples: 972800 | consumed tokens: 1992294400 | elapsed time per iteration (s): 0.30 | learning rate: 1.097E-04 | global batch size: 256 | lm loss: 3.958796E+00 | grad norm: 0.443 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.777 | TFLOPs: 30.38 | 7: iteration 3810/ 7508 | consumed samples: 975360 | consumed tokens: 1997537280 | elapsed time per iteration (s): 0.30 | learning rate: 1.093E-04 | global batch size: 256 | lm loss: 3.963432E+00 | grad norm: 0.452 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.523 | TFLOPs: 30.12 | 7: iteration 3820/ 7508 | consumed samples: 977920 | consumed tokens: 2002780160 | elapsed time per iteration (s): 0.30 | learning rate: 1.089E-04 | global batch size: 256 | lm loss: 3.947489E+00 | grad norm: 0.452 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.263 | TFLOPs: 30.36 | 7: iteration 3830/ 7508 | consumed samples: 980480 | consumed tokens: 2008023040 | elapsed time per iteration (s): 0.30 | learning rate: 1.085E-04 | global batch size: 256 | lm loss: 3.954451E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.524 | TFLOPs: 30.37 | 7: iteration 3840/ 7508 | consumed samples: 983040 | consumed tokens: 2013265920 | elapsed time per iteration (s): 0.29 | learning rate: 1.082E-04 | global batch size: 256 | lm loss: 3.946903E+00 | grad norm: 0.434 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.046 | TFLOPs: 30.39 | 7: iteration 3850/ 7508 | consumed samples: 985600 | consumed tokens: 2018508800 | elapsed time per iteration (s): 0.29 | learning rate: 1.078E-04 | global batch size: 256 | lm loss: 3.945050E+00 | grad norm: 0.474 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.802 | TFLOPs: 30.38 | 7: iteration 3860/ 7508 | consumed samples: 988160 | consumed tokens: 2023751680 | elapsed time per iteration (s): 0.29 | learning rate: 1.074E-04 | global batch size: 256 | lm loss: 3.948083E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.856 | TFLOPs: 30.38 | 7: iteration 3870/ 7508 | consumed samples: 990720 | consumed tokens: 2028994560 | elapsed time per iteration (s): 0.30 | learning rate: 1.070E-04 | global batch size: 256 | lm loss: 3.944392E+00 | grad norm: 0.422 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.638 | TFLOPs: 30.37 | 7: iteration 3880/ 7508 | consumed samples: 993280 | consumed tokens: 2034237440 | elapsed time per iteration (s): 0.30 | learning rate: 1.066E-04 | global batch size: 256 | lm loss: 3.947073E+00 | grad norm: 0.456 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.734 | TFLOPs: 30.38 | 7: iteration 3890/ 7508 | consumed samples: 995840 | consumed tokens: 2039480320 | elapsed time per iteration (s): 0.30 | learning rate: 1.063E-04 | global batch size: 256 | lm loss: 3.952935E+00 | grad norm: 0.445 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per 
second: 867.643 | TFLOPs: 30.37 | 7: iteration 3900/ 7508 | consumed samples: 998400 | consumed tokens: 2044723200 | elapsed time per iteration (s): 0.30 | learning rate: 1.059E-04 | global batch size: 256 | lm loss: 3.940564E+00 | grad norm: 0.491 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.956 | TFLOPs: 30.31 | 7: iteration 3910/ 7508 | consumed samples: 1000960 | consumed tokens: 2049966080 | elapsed time per iteration (s): 0.30 | learning rate: 1.055E-04 | global batch size: 256 | lm loss: 3.942976E+00 | grad norm: 0.424 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.558 | TFLOPs: 29.99 | 7: iteration 3920/ 7508 | consumed samples: 1003520 | consumed tokens: 2055208960 | elapsed time per iteration (s): 0.30 | learning rate: 1.051E-04 | global batch size: 256 | lm loss: 3.945861E+00 | grad norm: 0.422 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.644 | TFLOPs: 30.37 | 7: iteration 3930/ 7508 | consumed samples: 1006080 | consumed tokens: 2060451840 | elapsed time per iteration (s): 0.30 | learning rate: 1.047E-04 | global batch size: 256 | lm loss: 3.948680E+00 | grad norm: 0.420 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.049 | TFLOPs: 30.11 | 7: iteration 3940/ 7508 | consumed samples: 1008640 | consumed tokens: 2065694720 | elapsed time per iteration (s): 0.29 | learning rate: 1.044E-04 | global batch size: 256 | lm loss: 3.935606E+00 | grad norm: 0.448 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.223 | TFLOPs: 30.39 | 7: iteration 3950/ 7508 | consumed samples: 1011200 | consumed tokens: 2070937600 | elapsed time per iteration (s): 0.29 | learning rate: 1.040E-04 | global batch size: 256 | lm loss: 3.943368E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.550 | TFLOPs: 30.41 | 7: iteration 3960/ 7508 | consumed samples: 1013760 | consumed tokens: 2076180480 | elapsed time per iteration (s): 0.29 | learning rate: 1.036E-04 | global batch size: 256 | lm loss: 3.946190E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.951 | TFLOPs: 30.38 | 7: iteration 3970/ 7508 | consumed samples: 1016320 | consumed tokens: 2081423360 | elapsed time per iteration (s): 0.30 | learning rate: 1.032E-04 | global batch size: 256 | lm loss: 3.940848E+00 | grad norm: 0.432 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.419 | TFLOPs: 30.12 | 7: iteration 3980/ 7508 | consumed samples: 1018880 | consumed tokens: 2086666240 | elapsed time per iteration (s): 0.29 | learning rate: 1.028E-04 | global batch size: 256 | lm loss: 3.939610E+00 | grad norm: 0.434 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.127 | TFLOPs: 30.39 | 7: iteration 3990/ 7508 | consumed samples: 1021440 | consumed tokens: 2091909120 | elapsed time per iteration (s): 0.29 | learning rate: 1.025E-04 | global batch size: 256 | lm loss: 3.939184E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.174 | TFLOPs: 30.39 | 0: [2023-03-16 23:11:32,698] [INFO] [logging.py:68:log_dist] [Rank 0] 
step=4000, skipped=0, lr=[0.00010208850566272403, 0.00010208850566272403, 0.00010208850566272403], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 4000/ 7508 | consumed samples: 1024000 | consumed tokens: 2097152000 | elapsed time per iteration (s): 0.29 | learning rate: 1.021E-04 | global batch size: 256 | lm loss: 3.936742E+00 | grad norm: 0.452 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.218 | TFLOPs: 30.39 | 0: steps: 4000 loss: 3.9379 iter time (s): 0.293 samples/sec: 873.016 7: ----------------------------------------------------------------------------------------------- 7: validation loss at iteration 4000 | lm loss value: 3.962659E+00 | lm loss PPL: 5.259700E+01 | 7: ----------------------------------------------------------------------------------------------- 7: iteration 4010/ 7508 | consumed samples: 1026560 | consumed tokens: 2102394880 | elapsed time per iteration (s): 0.31 | learning rate: 1.017E-04 | global batch size: 256 | lm loss: 3.939825E+00 | grad norm: 0.467 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 837.835 | TFLOPs: 29.33 | 7: iteration 4020/ 7508 | consumed samples: 1029120 | consumed tokens: 2107637760 | elapsed time per iteration (s): 0.29 | learning rate: 1.013E-04 | global batch size: 256 | lm loss: 3.937803E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.662 | TFLOPs: 30.41 | 7: iteration 4030/ 7508 | consumed samples: 1031680 | consumed tokens: 2112880640 | elapsed time per iteration (s): 0.29 | learning rate: 1.010E-04 | global batch size: 256 | lm loss: 3.936838E+00 | grad norm: 0.463 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.927 | TFLOPs: 30.42 | 7: iteration 4040/ 7508 | consumed samples: 1034240 | consumed tokens: 2118123520 | elapsed time per iteration (s): 0.29 | learning rate: 1.006E-04 | global batch size: 256 | lm loss: 3.932527E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.677 | TFLOPs: 30.41 | 7: iteration 4050/ 7508 | consumed samples: 1036800 | consumed tokens: 2123366400 | elapsed time per iteration (s): 0.29 | learning rate: 1.002E-04 | global batch size: 256 | lm loss: 3.931726E+00 | grad norm: 0.423 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.467 | TFLOPs: 30.40 | 7: iteration 4060/ 7508 | consumed samples: 1039360 | consumed tokens: 2128609280 | elapsed time per iteration (s): 0.29 | learning rate: 9.982E-05 | global batch size: 256 | lm loss: 3.933136E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.865 | TFLOPs: 30.42 | 7: iteration 4070/ 7508 | consumed samples: 1041920 | consumed tokens: 2133852160 | elapsed time per iteration (s): 0.29 | learning rate: 9.944E-05 | global batch size: 256 | lm loss: 3.933916E+00 | grad norm: 0.441 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.297 | TFLOPs: 30.40 | 7: iteration 4080/ 7508 | consumed samples: 1044480 | consumed tokens: 2139095040 | elapsed time per iteration (s): 0.29 | learning rate: 9.906E-05 | global batch size: 256 | lm loss: 3.928770E+00 | grad norm: 0.438 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan 
iterations: 0 | samples per second: 868.188 | TFLOPs: 30.39 | 7: iteration 4090/ 7508 | consumed samples: 1047040 | consumed tokens: 2144337920 | elapsed time per iteration (s): 0.29 | learning rate: 9.868E-05 | global batch size: 256 | lm loss: 3.924287E+00 | grad norm: 0.446 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.151 | TFLOPs: 30.43 | 7: iteration 4100/ 7508 | consumed samples: 1049600 | consumed tokens: 2149580800 | elapsed time per iteration (s): 0.29 | learning rate: 9.831E-05 | global batch size: 256 | lm loss: 3.916913E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.785 | TFLOPs: 30.41 | 7: iteration 4110/ 7508 | consumed samples: 1052160 | consumed tokens: 2154823680 | elapsed time per iteration (s): 0.29 | learning rate: 9.793E-05 | global batch size: 256 | lm loss: 3.924800E+00 | grad norm: 0.466 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.800 | TFLOPs: 30.41 | 7: iteration 4120/ 7508 | consumed samples: 1054720 | consumed tokens: 2160066560 | elapsed time per iteration (s): 0.29 | learning rate: 9.755E-05 | global batch size: 256 | lm loss: 3.926799E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.845 | TFLOPs: 30.42 | 7: iteration 4130/ 7508 | consumed samples: 1057280 | consumed tokens: 2165309440 | elapsed time per iteration (s): 0.29 | learning rate: 9.718E-05 | global batch size: 256 | lm loss: 3.920244E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.771 | TFLOPs: 30.41 | 7: iteration 4140/ 7508 | consumed samples: 1059840 | consumed tokens: 2170552320 | elapsed time per iteration (s): 0.29 | learning rate: 9.680E-05 | global batch size: 256 | lm loss: 3.915890E+00 | grad norm: 0.505 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.457 | TFLOPs: 30.40 | 7: iteration 4150/ 7508 | consumed samples: 1062400 | consumed tokens: 2175795200 | elapsed time per iteration (s): 0.30 | learning rate: 9.642E-05 | global batch size: 256 | lm loss: 3.919303E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 844.012 | TFLOPs: 29.55 | 7: iteration 4160/ 7508 | consumed samples: 1064960 | consumed tokens: 2181038080 | elapsed time per iteration (s): 0.29 | learning rate: 9.605E-05 | global batch size: 256 | lm loss: 3.921948E+00 | grad norm: 0.497 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.342 | TFLOPs: 30.40 | 7: iteration 4170/ 7508 | consumed samples: 1067520 | consumed tokens: 2186280960 | elapsed time per iteration (s): 0.29 | learning rate: 9.567E-05 | global batch size: 256 | lm loss: 3.919427E+00 | grad norm: 0.422 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.581 | TFLOPs: 30.41 | 7: iteration 4180/ 7508 | consumed samples: 1070080 | consumed tokens: 2191523840 | elapsed time per iteration (s): 0.29 | learning rate: 9.530E-05 | global batch size: 256 | lm loss: 3.915093E+00 | grad norm: 0.421 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.142 | TFLOPs: 30.39 | 7: iteration 4190/ 7508 | consumed 
samples: 1072640 | consumed tokens: 2196766720 | elapsed time per iteration (s): 0.29 | learning rate: 9.492E-05 | global batch size: 256 | lm loss: 3.917890E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.248 | TFLOPs: 30.39 | 7: iteration 4200/ 7508 | consumed samples: 1075200 | consumed tokens: 2202009600 | elapsed time per iteration (s): 0.30 | learning rate: 9.455E-05 | global batch size: 256 | lm loss: 3.919370E+00 | grad norm: 0.441 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.706 | TFLOPs: 30.38 | 7: iteration 4210/ 7508 | consumed samples: 1077760 | consumed tokens: 2207252480 | elapsed time per iteration (s): 0.30 | learning rate: 9.417E-05 | global batch size: 256 | lm loss: 3.911735E+00 | grad norm: 0.454 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.226 | TFLOPs: 30.04 | 7: iteration 4220/ 7508 | consumed samples: 1080320 | consumed tokens: 2212495360 | elapsed time per iteration (s): 0.29 | learning rate: 9.380E-05 | global batch size: 256 | lm loss: 3.912853E+00 | grad norm: 0.427 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.665 | TFLOPs: 30.41 | 7: iteration 4230/ 7508 | consumed samples: 1082880 | consumed tokens: 2217738240 | elapsed time per iteration (s): 0.29 | learning rate: 9.342E-05 | global batch size: 256 | lm loss: 3.917852E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.553 | TFLOPs: 30.41 | 7: iteration 4240/ 7508 | consumed samples: 1085440 | consumed tokens: 2222981120 | elapsed time per iteration (s): 0.29 | learning rate: 9.305E-05 | global batch size: 256 | lm loss: 3.917835E+00 | grad norm: 0.457 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.170 | TFLOPs: 30.39 | 7: iteration 4250/ 7508 | consumed samples: 1088000 | consumed tokens: 2228224000 | elapsed time per iteration (s): 0.29 | learning rate: 9.268E-05 | global batch size: 256 | lm loss: 3.910600E+00 | grad norm: 0.446 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.409 | TFLOPs: 30.40 | 7: iteration 4260/ 7508 | consumed samples: 1090560 | consumed tokens: 2233466880 | elapsed time per iteration (s): 0.29 | learning rate: 9.230E-05 | global batch size: 256 | lm loss: 3.912196E+00 | grad norm: 0.429 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.030 | TFLOPs: 30.42 | 7: iteration 4270/ 7508 | consumed samples: 1093120 | consumed tokens: 2238709760 | elapsed time per iteration (s): 0.30 | learning rate: 9.193E-05 | global batch size: 256 | lm loss: 3.916418E+00 | grad norm: 0.437 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.253 | TFLOPs: 30.29 | 7: iteration 4280/ 7508 | consumed samples: 1095680 | consumed tokens: 2243952640 | elapsed time per iteration (s): 0.29 | learning rate: 9.156E-05 | global batch size: 256 | lm loss: 3.903660E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.205 | TFLOPs: 30.43 | 7: iteration 4290/ 7508 | consumed samples: 1098240 | consumed tokens: 2249195520 | elapsed time per iteration (s): 0.30 | learning rate: 
9.119E-05 | global batch size: 256 | lm loss: 3.900047E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 857.988 | TFLOPs: 30.04 | 7: iteration 4300/ 7508 | consumed samples: 1100800 | consumed tokens: 2254438400 | elapsed time per iteration (s): 0.30 | learning rate: 9.082E-05 | global batch size: 256 | lm loss: 3.912791E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.740 | TFLOPs: 30.13 | 7: iteration 4310/ 7508 | consumed samples: 1103360 | consumed tokens: 2259681280 | elapsed time per iteration (s): 0.30 | learning rate: 9.044E-05 | global batch size: 256 | lm loss: 3.911827E+00 | grad norm: 0.442 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 864.810 | TFLOPs: 30.27 | 7: iteration 4320/ 7508 | consumed samples: 1105920 | consumed tokens: 2264924160 | elapsed time per iteration (s): 0.30 | learning rate: 9.007E-05 | global batch size: 256 | lm loss: 3.909059E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 864.604 | TFLOPs: 30.27 | 7: iteration 4330/ 7508 | consumed samples: 1108480 | consumed tokens: 2270167040 | elapsed time per iteration (s): 0.29 | learning rate: 8.970E-05 | global batch size: 256 | lm loss: 3.906305E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.530 | TFLOPs: 30.40 | 7: iteration 4340/ 7508 | consumed samples: 1111040 | consumed tokens: 2275409920 | elapsed time per iteration (s): 0.29 | learning rate: 8.933E-05 | global batch size: 256 | lm loss: 3.907364E+00 | grad norm: 0.429 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.462 | TFLOPs: 30.40 | 7: iteration 4350/ 7508 | consumed samples: 1113600 | consumed tokens: 2280652800 | elapsed time per iteration (s): 0.29 | learning rate: 8.896E-05 | global batch size: 256 | lm loss: 3.911613E+00 | grad norm: 0.427 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.767 | TFLOPs: 30.41 | 7: iteration 4360/ 7508 | consumed samples: 1116160 | consumed tokens: 2285895680 | elapsed time per iteration (s): 0.29 | learning rate: 8.859E-05 | global batch size: 256 | lm loss: 3.912123E+00 | grad norm: 0.445 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.600 | TFLOPs: 30.41 | 7: iteration 4370/ 7508 | consumed samples: 1118720 | consumed tokens: 2291138560 | elapsed time per iteration (s): 0.30 | learning rate: 8.822E-05 | global batch size: 256 | lm loss: 3.903634E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.649 | TFLOPs: 30.37 | 7: iteration 4380/ 7508 | consumed samples: 1121280 | consumed tokens: 2296381440 | elapsed time per iteration (s): 0.30 | learning rate: 8.785E-05 | global batch size: 256 | lm loss: 3.901829E+00 | grad norm: 0.450 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.364 | TFLOPs: 30.36 | 7: iteration 4390/ 7508 | consumed samples: 1123840 | consumed tokens: 2301624320 | elapsed time per iteration (s): 0.29 | learning rate: 8.749E-05 | global batch size: 256 | lm loss: 3.894793E+00 | grad norm: 0.429 | num zeros: 0.0 | number 
of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.927 | TFLOPs: 30.38 | 7: iteration 4400/ 7508 | consumed samples: 1126400 | consumed tokens: 2306867200 | elapsed time per iteration (s): 0.29 | learning rate: 8.712E-05 | global batch size: 256 | lm loss: 3.903096E+00 | grad norm: 0.470 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.131 | TFLOPs: 30.39 | 7: iteration 4410/ 7508 | consumed samples: 1128960 | consumed tokens: 2312110080 | elapsed time per iteration (s): 0.29 | learning rate: 8.675E-05 | global batch size: 256 | lm loss: 3.908083E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.189 | TFLOPs: 30.39 | 7: iteration 4420/ 7508 | consumed samples: 1131520 | consumed tokens: 2317352960 | elapsed time per iteration (s): 0.29 | learning rate: 8.638E-05 | global batch size: 256 | lm loss: 3.899397E+00 | grad norm: 0.426 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.377 | TFLOPs: 30.40 | 7: iteration 4430/ 7508 | consumed samples: 1134080 | consumed tokens: 2322595840 | elapsed time per iteration (s): 0.30 | learning rate: 8.602E-05 | global batch size: 256 | lm loss: 3.901009E+00 | grad norm: 0.455 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 855.992 | TFLOPs: 29.97 | 7: iteration 4440/ 7508 | consumed samples: 1136640 | consumed tokens: 2327838720 | elapsed time per iteration (s): 0.29 | learning rate: 8.565E-05 | global batch size: 256 | lm loss: 3.899742E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.482 | TFLOPs: 30.44 | 7: iteration 4450/ 7508 | consumed samples: 1139200 | consumed tokens: 2333081600 | elapsed time per iteration (s): 0.29 | learning rate: 8.528E-05 | global batch size: 256 | lm loss: 3.899370E+00 | grad norm: 0.429 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.499 | TFLOPs: 30.44 | 7: iteration 4460/ 7508 | consumed samples: 1141760 | consumed tokens: 2338324480 | elapsed time per iteration (s): 0.29 | learning rate: 8.492E-05 | global batch size: 256 | lm loss: 3.908639E+00 | grad norm: 0.424 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.344 | TFLOPs: 30.43 | 7: iteration 4470/ 7508 | consumed samples: 1144320 | consumed tokens: 2343567360 | elapsed time per iteration (s): 0.29 | learning rate: 8.455E-05 | global batch size: 256 | lm loss: 3.895926E+00 | grad norm: 0.484 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.545 | TFLOPs: 30.44 | 7: iteration 4480/ 7508 | consumed samples: 1146880 | consumed tokens: 2348810240 | elapsed time per iteration (s): 0.29 | learning rate: 8.419E-05 | global batch size: 256 | lm loss: 3.895453E+00 | grad norm: 0.457 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.309 | TFLOPs: 30.43 | 7: iteration 4490/ 7508 | consumed samples: 1149440 | consumed tokens: 2354053120 | elapsed time per iteration (s): 0.29 | learning rate: 8.382E-05 | global batch size: 256 | lm loss: 3.896726E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.896 | TFLOPs: 30.42 | 
7: iteration 4500/ 7508 | consumed samples: 1152000 | consumed tokens: 2359296000 | elapsed time per iteration (s): 0.29 | learning rate: 8.346E-05 | global batch size: 256 | lm loss: 3.889480E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.141 | TFLOPs: 30.43 | 7: iteration 4510/ 7508 | consumed samples: 1154560 | consumed tokens: 2364538880 | elapsed time per iteration (s): 0.29 | learning rate: 8.310E-05 | global batch size: 256 | lm loss: 3.898359E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.057 | TFLOPs: 30.42 | 7: iteration 4520/ 7508 | consumed samples: 1157120 | consumed tokens: 2369781760 | elapsed time per iteration (s): 0.29 | learning rate: 8.273E-05 | global batch size: 256 | lm loss: 3.891165E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.031 | TFLOPs: 30.42 | 7: iteration 4530/ 7508 | consumed samples: 1159680 | consumed tokens: 2375024640 | elapsed time per iteration (s): 0.30 | learning rate: 8.237E-05 | global batch size: 256 | lm loss: 3.880109E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 857.415 | TFLOPs: 30.02 | 7: iteration 4540/ 7508 | consumed samples: 1162240 | consumed tokens: 2380267520 | elapsed time per iteration (s): 0.29 | learning rate: 8.201E-05 | global batch size: 256 | lm loss: 3.886491E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.622 | TFLOPs: 30.41 | 7: iteration 4550/ 7508 | consumed samples: 1164800 | consumed tokens: 2385510400 | elapsed time per iteration (s): 0.30 | learning rate: 8.165E-05 | global batch size: 256 | lm loss: 3.888776E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.248 | TFLOPs: 30.04 | 7: iteration 4560/ 7508 | consumed samples: 1167360 | consumed tokens: 2390753280 | elapsed time per iteration (s): 0.29 | learning rate: 8.129E-05 | global batch size: 256 | lm loss: 3.886836E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.271 | TFLOPs: 30.40 | 7: iteration 4570/ 7508 | consumed samples: 1169920 | consumed tokens: 2395996160 | elapsed time per iteration (s): 0.30 | learning rate: 8.093E-05 | global batch size: 256 | lm loss: 3.887727E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.567 | TFLOPs: 29.99 | 7: iteration 4580/ 7508 | consumed samples: 1172480 | consumed tokens: 2401239040 | elapsed time per iteration (s): 0.30 | learning rate: 8.057E-05 | global batch size: 256 | lm loss: 3.881500E+00 | grad norm: 0.423 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 839.892 | TFLOPs: 29.40 | 7: iteration 4590/ 7508 | consumed samples: 1175040 | consumed tokens: 2406481920 | elapsed time per iteration (s): 0.29 | learning rate: 8.021E-05 | global batch size: 256 | lm loss: 3.884024E+00 | grad norm: 0.466 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.180 | TFLOPs: 30.39 | 7: iteration 4600/ 7508 | consumed samples: 1177600 | consumed tokens: 2411724800 | elapsed time per 
iteration (s): 0.29 | learning rate: 7.985E-05 | global batch size: 256 | lm loss: 3.889787E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.858 | TFLOPs: 30.38 | 7: iteration 4610/ 7508 | consumed samples: 1180160 | consumed tokens: 2416967680 | elapsed time per iteration (s): 0.29 | learning rate: 7.949E-05 | global batch size: 256 | lm loss: 3.886699E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.292 | TFLOPs: 30.40 | 7: iteration 4620/ 7508 | consumed samples: 1182720 | consumed tokens: 2422210560 | elapsed time per iteration (s): 0.29 | learning rate: 7.913E-05 | global batch size: 256 | lm loss: 3.882534E+00 | grad norm: 0.432 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.619 | TFLOPs: 30.41 | 7: iteration 4630/ 7508 | consumed samples: 1185280 | consumed tokens: 2427453440 | elapsed time per iteration (s): 0.29 | learning rate: 7.878E-05 | global batch size: 256 | lm loss: 3.881941E+00 | grad norm: 0.442 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.570 | TFLOPs: 30.41 | 7: iteration 4640/ 7508 | consumed samples: 1187840 | consumed tokens: 2432696320 | elapsed time per iteration (s): 0.29 | learning rate: 7.842E-05 | global batch size: 256 | lm loss: 3.885246E+00 | grad norm: 0.457 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.592 | TFLOPs: 30.41 | 7: iteration 4650/ 7508 | consumed samples: 1190400 | consumed tokens: 2437939200 | elapsed time per iteration (s): 0.30 | learning rate: 7.807E-05 | global batch size: 256 | lm loss: 3.877881E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 848.424 | TFLOPs: 29.70 | 7: iteration 4660/ 7508 | consumed samples: 1192960 | consumed tokens: 2443182080 | elapsed time per iteration (s): 0.30 | learning rate: 7.771E-05 | global batch size: 256 | lm loss: 3.889363E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.911 | TFLOPs: 30.35 | 7: iteration 4670/ 7508 | consumed samples: 1195520 | consumed tokens: 2448424960 | elapsed time per iteration (s): 0.30 | learning rate: 7.736E-05 | global batch size: 256 | lm loss: 3.880320E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.362 | TFLOPs: 30.36 | 7: iteration 4680/ 7508 | consumed samples: 1198080 | consumed tokens: 2453667840 | elapsed time per iteration (s): 0.30 | learning rate: 7.700E-05 | global batch size: 256 | lm loss: 3.878859E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.028 | TFLOPs: 30.35 | 7: iteration 4690/ 7508 | consumed samples: 1200640 | consumed tokens: 2458910720 | elapsed time per iteration (s): 0.30 | learning rate: 7.665E-05 | global batch size: 256 | lm loss: 3.884475E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.590 | TFLOPs: 30.34 | 7: iteration 4700/ 7508 | consumed samples: 1203200 | consumed tokens: 2464153600 | elapsed time per iteration (s): 0.30 | learning rate: 7.629E-05 | global batch size: 256 | lm loss: 3.880579E+00 | grad 
norm: 0.485 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.249 | TFLOPs: 30.36 | 7: iteration 4710/ 7508 | consumed samples: 1205760 | consumed tokens: 2469396480 | elapsed time per iteration (s): 0.30 | learning rate: 7.594E-05 | global batch size: 256 | lm loss: 3.880830E+00 | grad norm: 0.434 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.826 | TFLOPs: 30.35 | 7: iteration 4720/ 7508 | consumed samples: 1208320 | consumed tokens: 2474639360 | elapsed time per iteration (s): 0.30 | learning rate: 7.559E-05 | global batch size: 256 | lm loss: 3.875607E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.147 | TFLOPs: 30.36 | 7: iteration 4730/ 7508 | consumed samples: 1210880 | consumed tokens: 2479882240 | elapsed time per iteration (s): 0.30 | learning rate: 7.524E-05 | global batch size: 256 | lm loss: 3.876142E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.934 | TFLOPs: 30.35 | 7: iteration 4740/ 7508 | consumed samples: 1213440 | consumed tokens: 2485125120 | elapsed time per iteration (s): 0.29 | learning rate: 7.489E-05 | global batch size: 256 | lm loss: 3.879628E+00 | grad norm: 0.386 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.596 | TFLOPs: 30.41 | 7: iteration 4750/ 7508 | consumed samples: 1216000 | consumed tokens: 2490368000 | elapsed time per iteration (s): 0.29 | learning rate: 7.454E-05 | global batch size: 256 | lm loss: 3.870012E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.046 | TFLOPs: 30.42 | 7: iteration 4760/ 7508 | consumed samples: 1218560 | consumed tokens: 2495610880 | elapsed time per iteration (s): 0.29 | learning rate: 7.419E-05 | global batch size: 256 | lm loss: 3.876524E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.052 | TFLOPs: 30.42 | 7: iteration 4770/ 7508 | consumed samples: 1221120 | consumed tokens: 2500853760 | elapsed time per iteration (s): 0.29 | learning rate: 7.384E-05 | global batch size: 256 | lm loss: 3.868264E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.219 | TFLOPs: 30.43 | 7: iteration 4780/ 7508 | consumed samples: 1223680 | consumed tokens: 2506096640 | elapsed time per iteration (s): 0.29 | learning rate: 7.349E-05 | global batch size: 256 | lm loss: 3.867371E+00 | grad norm: 0.442 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.072 | TFLOPs: 30.42 | 7: iteration 4790/ 7508 | consumed samples: 1226240 | consumed tokens: 2511339520 | elapsed time per iteration (s): 0.30 | learning rate: 7.315E-05 | global batch size: 256 | lm loss: 3.875808E+00 | grad norm: 0.443 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 864.220 | TFLOPs: 30.25 | 7: iteration 4800/ 7508 | consumed samples: 1228800 | consumed tokens: 2516582400 | elapsed time per iteration (s): 0.29 | learning rate: 7.280E-05 | global batch size: 256 | lm loss: 3.870295E+00 | grad norm: 0.434 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples 
per second: 868.690 | TFLOPs: 30.41 | 7: iteration 4810/ 7508 | consumed samples: 1231360 | consumed tokens: 2521825280 | elapsed time per iteration (s): 0.29 | learning rate: 7.245E-05 | global batch size: 256 | lm loss: 3.867918E+00 | grad norm: 0.424 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.054 | TFLOPs: 30.42 | 7: iteration 4820/ 7508 | consumed samples: 1233920 | consumed tokens: 2527068160 | elapsed time per iteration (s): 0.29 | learning rate: 7.211E-05 | global batch size: 256 | lm loss: 3.875624E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.569 | TFLOPs: 30.41 | 7: iteration 4830/ 7508 | consumed samples: 1236480 | consumed tokens: 2532311040 | elapsed time per iteration (s): 0.29 | learning rate: 7.176E-05 | global batch size: 256 | lm loss: 3.863884E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.250 | TFLOPs: 30.43 | 7: iteration 4840/ 7508 | consumed samples: 1239040 | consumed tokens: 2537553920 | elapsed time per iteration (s): 0.29 | learning rate: 7.142E-05 | global batch size: 256 | lm loss: 3.871005E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.033 | TFLOPs: 30.42 | 7: iteration 4850/ 7508 | consumed samples: 1241600 | consumed tokens: 2542796800 | elapsed time per iteration (s): 0.29 | learning rate: 7.108E-05 | global batch size: 256 | lm loss: 3.863798E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.210 | TFLOPs: 30.43 | 7: iteration 4860/ 7508 | consumed samples: 1244160 | consumed tokens: 2548039680 | elapsed time per iteration (s): 0.29 | learning rate: 7.073E-05 | global batch size: 256 | lm loss: 3.866684E+00 | grad norm: 0.427 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.293 | TFLOPs: 30.43 | 7: iteration 4870/ 7508 | consumed samples: 1246720 | consumed tokens: 2553282560 | elapsed time per iteration (s): 0.29 | learning rate: 7.039E-05 | global batch size: 256 | lm loss: 3.878104E+00 | grad norm: 0.447 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.782 | TFLOPs: 30.41 | 7: iteration 4880/ 7508 | consumed samples: 1249280 | consumed tokens: 2558525440 | elapsed time per iteration (s): 0.30 | learning rate: 7.005E-05 | global batch size: 256 | lm loss: 3.867819E+00 | grad norm: 0.421 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.557 | TFLOPs: 30.13 | 7: iteration 4890/ 7508 | consumed samples: 1251840 | consumed tokens: 2563768320 | elapsed time per iteration (s): 0.29 | learning rate: 6.971E-05 | global batch size: 256 | lm loss: 3.859233E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.043 | TFLOPs: 30.42 | 7: iteration 4900/ 7508 | consumed samples: 1254400 | consumed tokens: 2569011200 | elapsed time per iteration (s): 0.29 | learning rate: 6.937E-05 | global batch size: 256 | lm loss: 3.862043E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.920 | TFLOPs: 30.42 | 7: iteration 4910/ 7508 | consumed samples: 1256960 | consumed 
tokens: 2574254080 | elapsed time per iteration (s): 0.29 | learning rate: 6.903E-05 | global batch size: 256 | lm loss: 3.860347E+00 | grad norm: 0.396 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.441 | TFLOPs: 30.40 | 7: iteration 4920/ 7508 | consumed samples: 1259520 | consumed tokens: 2579496960 | elapsed time per iteration (s): 0.29 | learning rate: 6.869E-05 | global batch size: 256 | lm loss: 3.869130E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.336 | TFLOPs: 30.40 | 7: iteration 4930/ 7508 | consumed samples: 1262080 | consumed tokens: 2584739840 | elapsed time per iteration (s): 0.29 | learning rate: 6.835E-05 | global batch size: 256 | lm loss: 3.860358E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.532 | TFLOPs: 30.40 | 7: iteration 4940/ 7508 | consumed samples: 1264640 | consumed tokens: 2589982720 | elapsed time per iteration (s): 0.29 | learning rate: 6.802E-05 | global batch size: 256 | lm loss: 3.861964E+00 | grad norm: 0.432 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.641 | TFLOPs: 30.41 | 7: iteration 4950/ 7508 | consumed samples: 1267200 | consumed tokens: 2595225600 | elapsed time per iteration (s): 0.29 | learning rate: 6.768E-05 | global batch size: 256 | lm loss: 3.863991E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.300 | TFLOPs: 30.43 | 7: iteration 4960/ 7508 | consumed samples: 1269760 | consumed tokens: 2600468480 | elapsed time per iteration (s): 0.29 | learning rate: 6.735E-05 | global batch size: 256 | lm loss: 3.864890E+00 | grad norm: 0.426 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.764 | TFLOPs: 30.41 | 7: iteration 4970/ 7508 | consumed samples: 1272320 | consumed tokens: 2605711360 | elapsed time per iteration (s): 0.30 | learning rate: 6.701E-05 | global batch size: 256 | lm loss: 3.864403E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 857.539 | TFLOPs: 30.02 | 7: iteration 4980/ 7508 | consumed samples: 1274880 | consumed tokens: 2610954240 | elapsed time per iteration (s): 0.29 | learning rate: 6.668E-05 | global batch size: 256 | lm loss: 3.856442E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.070 | TFLOPs: 30.42 | 7: iteration 4990/ 7508 | consumed samples: 1277440 | consumed tokens: 2616197120 | elapsed time per iteration (s): 0.29 | learning rate: 6.634E-05 | global batch size: 256 | lm loss: 3.864280E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.867 | TFLOPs: 30.42 | 7: iteration 5000/ 7508 | consumed samples: 1280000 | consumed tokens: 2621440000 | elapsed time per iteration (s): 0.29 | learning rate: 6.601E-05 | global batch size: 256 | lm loss: 3.851756E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.607 | TFLOPs: 30.41 | 7: ----------------------------------------------------------------------------------------------- 7: validation loss at iteration 5000 | lm loss value: 3.883558E+00 | 
lm loss PPL: 4.859683E+01 | 7: ----------------------------------------------------------------------------------------------- 0: saving checkpoint at iteration 5000 to checkpoints_146m3b9100mdedup 0: [2023-03-16 23:16:28,293] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step5000 is begin to save! 0: [2023-03-16 23:16:28,368] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_01-model_00-model_states.pt... 0: [2023-03-16 23:16:28,508] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_01-model_00-model_states.pt. 0: [2023-03-16 23:16:28,508] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_03-model_00-model_states.pt... 0: [2023-03-16 23:16:28,524] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_03-model_00-model_states.pt. 0: [2023-03-16 23:16:28,525] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_04-model_00-model_states.pt... 0: [2023-03-16 23:16:28,540] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_04-model_00-model_states.pt. 0: [2023-03-16 23:16:28,540] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_05-model_00-model_states.pt... 0: [2023-03-16 23:16:28,555] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_05-model_00-model_states.pt. 0: [2023-03-16 23:16:28,555] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_06-model_00-model_states.pt... 0: [2023-03-16 23:16:28,570] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_06-model_00-model_states.pt. 0: [2023-03-16 23:16:28,570] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_07-model_00-model_states.pt... 0: [2023-03-16 23:16:28,585] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_07-model_00-model_states.pt. 0: [2023-03-16 23:16:28,585] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_08-model_00-model_states.pt... 0: [2023-03-16 23:16:28,600] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_08-model_00-model_states.pt. 0: [2023-03-16 23:16:28,601] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_09-model_00-model_states.pt... 0: [2023-03-16 23:16:28,615] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_09-model_00-model_states.pt. 0: [2023-03-16 23:16:28,616] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_10-model_00-model_states.pt... 0: [2023-03-16 23:16:28,631] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_10-model_00-model_states.pt. 0: [2023-03-16 23:16:28,631] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_11-model_00-model_states.pt... 
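The validation block above reports both the lm loss and its perplexity; the logged PPL is simply exp(lm loss), with the loss in nats. A minimal check in plain Python (an illustration, not part of the training script):

import math

val_lm_loss = 3.883558               # "lm loss value" logged at iteration 5000
ppl = math.exp(val_lm_loss)          # perplexity = exp(cross-entropy in nats)
print(f"lm loss PPL: {ppl:.6E}")     # ~4.859683E+01, matching the line above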
0: [2023-03-16 23:16:28,646] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_11-model_00-model_states.pt. 0: [2023-03-16 23:16:28,646] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_12-model_00-model_states.pt... 0: [2023-03-16 23:16:28,661] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_12-model_00-model_states.pt. 0: [2023-03-16 23:16:28,661] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_13-model_00-model_states.pt... 0: [2023-03-16 23:16:28,676] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_13-model_00-model_states.pt. 0: [2023-03-16 23:16:28,676] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_14-model_00-model_states.pt... 0: [2023-03-16 23:16:28,691] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_14-model_00-model_states.pt. 0: [2023-03-16 23:16:28,692] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_15-model_00-model_states.pt... 0: [2023-03-16 23:16:28,707] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_15-model_00-model_states.pt. 0: [2023-03-16 23:16:28,707] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_16-model_00-model_states.pt... 0: [2023-03-16 23:16:28,722] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_16-model_00-model_states.pt. 0: [2023-03-16 23:16:28,722] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_17-model_00-model_states.pt... 0: [2023-03-16 23:16:28,737] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_17-model_00-model_states.pt. 0: [2023-03-16 23:16:28,737] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/layer_19-model_00-model_states.pt... 0: [2023-03-16 23:16:28,738] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/layer_19-model_00-model_states.pt. 0: [2023-03-16 23:16:28,739] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_146m3b9100mdedup/global_step5000/mp_rank_00_model_states.pt 0: [2023-03-16 23:16:28,739] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/mp_rank_00_model_states.pt... 0: [2023-03-16 23:16:28,742] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/mp_rank_00_model_states.pt. 0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 
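The layer_XX-model_00-model_states.pt files saved above are DeepSpeed's per-layer module states; with tensor and pipeline parallelism both set to 1 in the launch command, rank 0 writes all of them. A hypothetical inspection sketch for one of these files; the path is copied from the log and the state-dict layout is an assumption:

import torch

layer_path = "checkpoints_146m3b9100mdedup/global_step5000/layer_05-model_00-model_states.pt"
state = torch.load(layer_path, map_location="cpu")   # expected: a dict of parameter tensors for that layer
for name, tensor in state.items():
    print(name, tuple(tensor.shape), tensor.dtype)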
0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 
1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 
6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 
4: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,792] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:16:28,795] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:16:28,795] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,795] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 0: [2023-03-16 23:16:28,795] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:16:28,796] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,796] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 0: [2023-03-16 23:16:28,796] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:16:28,796] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,796] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 0: [2023-03-16 23:16:28,796] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:16:28,796] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:16:28,796] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,796] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,796] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 0: [2023-03-16 23:16:28,797] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 0: [2023-03-16 23:16:28,797] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 
0: [2023-03-16 23:16:28,797] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,797] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:16:28,761] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:16:28,803] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:16:28,803] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,803] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 4: [2023-03-16 23:16:28,831] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,831] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,831] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,831] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,832] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,832] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 4: [2023-03-16 23:16:28,832] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,832] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 
4: [2023-03-16 23:16:28,832] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,832] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,832] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,832] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 4: [2023-03-16 23:16:28,834] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:16:28,835] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2023-03-16 23:16:28,835] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 7: [2023-03-16 23:16:28,835] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,835] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,835] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 
6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:16:28,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,836] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,836] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 7: [2023-03-16 23:16:28,837] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,837] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,837] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 6: [2023-03-16 23:16:28,838] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:16:28,838] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 6: [2023-03-16 23:16:28,838] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 
7: [2023-03-16 23:16:28,840] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,840] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 0: [2023-03-16 23:16:28,840] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,840] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 0: [2023-03-16 23:16:28,840] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 7: [2023-03-16 23:16:28,841] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,841] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,841] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 7: [2023-03-16 23:16:28,841] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,841] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,841] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 
2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 2: [2023-03-16 23:16:28,844] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2023-03-16 23:16:28,844] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2023-03-16 23:16:28,845] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 7: [2023-03-16 23:16:28,845] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 
5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 7: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 
5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 7: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 7: [2023-03-16 23:16:28,846] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2023-03-16 23:16:28,846] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 3: [2023-03-16 23:16:28,849] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,849] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,849] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,849] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,849] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 3: [2023-03-16 23:16:28,849] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,849] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,849] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 3: [2023-03-16 23:16:28,849] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 3: [2023-03-16 23:16:28,852] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 
1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,855] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,855] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,855] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,856] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,856] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,856] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,856] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 3: [2023-03-16 23:16:28,856] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 3: [2023-03-16 23:16:28,856] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 3: [2023-03-16 23:16:28,858] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 3: [2023-03-16 23:16:28,858] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2023-03-16 23:16:28,858] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 
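Once rank 0 reports success just below, the global_step5000 directory should contain the files this save sequence listed: the per-layer model states, a single mp_rank_00_model_states.pt, and one bf16 ZeRO optimizer-state shard per data-parallel rank (ranks 0 through 63 appear in the log). A hypothetical completeness check, with the layer indices and rank count taken from the entries above:

import os.path

ckpt_dir = "checkpoints_146m3b9100mdedup/global_step5000"
layer_ids = [1] + list(range(3, 18)) + [19]      # layer_01, layer_03..layer_17, layer_19 as saved above

expected = [f"layer_{i:02d}-model_00-model_states.pt" for i in layer_ids]
expected.append("mp_rank_00_model_states.pt")
expected += [f"bf16_zero_pp_rank_{r}_mp_rank_00_optim_states.pt" for r in range(64)]

missing = [name for name in expected if not os.path.isfile(os.path.join(ckpt_dir, name))]
print(f"{len(expected) - len(missing)}/{len(expected)} checkpoint files present")
if missing:
    print("missing:", missing)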
0: successfully saved checkpoint at iteration 5000 to checkpoints_146m3b9100mdedup 7: time (ms) | save-checkpoint: 572.89 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,853] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,853] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step5000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2023-03-16 23:16:28,854] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,854] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,854] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,854] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 1: [2023-03-16 23:16:28,854] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step5000 is ready now! 
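
The iteration records that resume below can be cross-checked against the run configuration at the head of this log. The sketch below (assumptions are flagged in the comments: a padded GPT-2 vocabulary of roughly 50k, 64 GPUs in total, and a Megatron-style FLOPs estimate without activation recomputation; the exact expression used by the logger may differ) reproduces the consumed-token counts, the roughly 30.4 TFLOPs per GPU, the learning-rate value logged at step 6000, and the validation perplexity reported at iteration 6000:

    import math

    # Run configuration taken from the command line at the top of this log.
    SEQ_LEN      = 2048
    GLOBAL_BATCH = 256
    HIDDEN       = 768
    N_LAYERS     = 15
    VOCAB        = 50304      # GPT-2 vocab padded to a multiple of 128 (assumption)
    N_GPUS       = 64         # 8 nodes x 8 GPUs, TP=PP=1 (assumption based on the SMI dumps)
    MAX_LR, MIN_LR = 2e-4, 2e-5
    WARMUP_SAMPLES, DECAY_SAMPLES = 19_221, 1_922_149

    # 1) consumed tokens = consumed samples * sequence length (iteration 5010 below)
    assert 1_282_560 * SEQ_LEN == 2_626_682_880

    # 2) throughput: ~869 samples/s at an iteration time the log rounds to 0.29 s
    iter_time = GLOBAL_BATCH / 869.2        # ~0.2945 s per iteration
    tokens_per_sec = 869.2 * SEQ_LEN        # ~1.78M tokens/s across the whole job

    # 3) per-GPU TFLOPs, using the usual Megatron-style estimate
    #    (coefficient 72 corresponds to forward+backward with no activation recomputation)
    flops_per_iter = 72 * GLOBAL_BATCH * SEQ_LEN * N_LAYERS * HIDDEN**2 * (
        1 + SEQ_LEN / (6 * HIDDEN) + VOCAB / (16 * N_LAYERS * HIDDEN))
    print("TFLOPs/GPU ~", flops_per_iter / iter_time / N_GPUS / 1e12)   # ~30.4, as logged

    # 4) the sample-based cosine schedule from the arguments reproduces the LR at step 6000
    def lr_at(samples):
        t = (samples - WARMUP_SAMPLES) / (DECAY_SAMPLES - WARMUP_SAMPLES)
        return MIN_LR + 0.5 * (MAX_LR - MIN_LR) * (1 + math.cos(math.pi * t))
    print("lr @ 1,536,000 samples ~", lr_at(1_536_000))   # ~3.768e-05

    # 5) the reported validation perplexity is exp(validation lm loss)
    print("exp(3.901920) =", math.exp(3.901920))           # ~49.497

All four derived values agree with the figures printed in the log to the precision shown there, which is consistent with the TFLOPs counter assuming no activation recomputation.
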
7: iteration 5010/ 7508 | consumed samples: 1282560 | consumed tokens: 2626682880 | elapsed time per iteration (s): 0.36 | learning rate: 6.568E-05 | global batch size: 256 | lm loss: 3.860845E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 708.901 | TFLOPs: 24.82 | 7: iteration 5020/ 7508 | consumed samples: 1285120 | consumed tokens: 2631925760 | elapsed time per iteration (s): 0.29 | learning rate: 6.535E-05 | global batch size: 256 | lm loss: 3.857074E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.242 | TFLOPs: 30.43 | 7: iteration 5030/ 7508 | consumed samples: 1287680 | consumed tokens: 2637168640 | elapsed time per iteration (s): 0.29 | learning rate: 6.502E-05 | global batch size: 256 | lm loss: 3.856945E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.230 | TFLOPs: 30.43 | 7: iteration 5040/ 7508 | consumed samples: 1290240 | consumed tokens: 2642411520 | elapsed time per iteration (s): 0.29 | learning rate: 6.469E-05 | global batch size: 256 | lm loss: 3.856787E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.809 | TFLOPs: 30.45 | 7: iteration 5050/ 7508 | consumed samples: 1292800 | consumed tokens: 2647654400 | elapsed time per iteration (s): 0.30 | learning rate: 6.436E-05 | global batch size: 256 | lm loss: 3.863651E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.749 | TFLOPs: 30.10 | 7: iteration 5060/ 7508 | consumed samples: 1295360 | consumed tokens: 2652897280 | elapsed time per iteration (s): 0.30 | learning rate: 6.404E-05 | global batch size: 256 | lm loss: 3.853600E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.684 | TFLOPs: 30.10 | 7: iteration 5070/ 7508 | consumed samples: 1297920 | consumed tokens: 2658140160 | elapsed time per iteration (s): 0.30 | learning rate: 6.371E-05 | global batch size: 256 | lm loss: 3.848764E+00 | grad norm: 0.445 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.759 | TFLOPs: 30.20 | 7: iteration 5080/ 7508 | consumed samples: 1300480 | consumed tokens: 2663383040 | elapsed time per iteration (s): 0.29 | learning rate: 6.338E-05 | global batch size: 256 | lm loss: 3.861921E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.520 | TFLOPs: 30.44 | 7: iteration 5090/ 7508 | consumed samples: 1303040 | consumed tokens: 2668625920 | elapsed time per iteration (s): 0.29 | learning rate: 6.306E-05 | global batch size: 256 | lm loss: 3.856953E+00 | grad norm: 0.396 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.687 | TFLOPs: 30.45 | 7: iteration 5100/ 7508 | consumed samples: 1305600 | consumed tokens: 2673868800 | elapsed time per iteration (s): 0.29 | learning rate: 6.273E-05 | global batch size: 256 | lm loss: 3.849049E+00 | grad norm: 0.422 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.535 | TFLOPs: 30.44 | 7: iteration 5110/ 7508 | consumed samples: 1308160 | consumed tokens: 2679111680 | elapsed time per 
iteration (s): 0.29 | learning rate: 6.241E-05 | global batch size: 256 | lm loss: 3.851581E+00 | grad norm: 0.453 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.381 | TFLOPs: 30.43 | 7: iteration 5120/ 7508 | consumed samples: 1310720 | consumed tokens: 2684354560 | elapsed time per iteration (s): 0.29 | learning rate: 6.209E-05 | global batch size: 256 | lm loss: 3.852163E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.852 | TFLOPs: 30.45 | 7: iteration 5130/ 7508 | consumed samples: 1313280 | consumed tokens: 2689597440 | elapsed time per iteration (s): 0.29 | learning rate: 6.177E-05 | global batch size: 256 | lm loss: 3.848816E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.071 | TFLOPs: 30.46 | 7: iteration 5140/ 7508 | consumed samples: 1315840 | consumed tokens: 2694840320 | elapsed time per iteration (s): 0.29 | learning rate: 6.145E-05 | global batch size: 256 | lm loss: 3.842785E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.585 | TFLOPs: 30.44 | 7: iteration 5150/ 7508 | consumed samples: 1318400 | consumed tokens: 2700083200 | elapsed time per iteration (s): 0.29 | learning rate: 6.113E-05 | global batch size: 256 | lm loss: 3.858166E+00 | grad norm: 0.445 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.274 | TFLOPs: 30.40 | 7: iteration 5160/ 7508 | consumed samples: 1320960 | consumed tokens: 2705326080 | elapsed time per iteration (s): 0.29 | learning rate: 6.081E-05 | global batch size: 256 | lm loss: 3.840981E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.373 | TFLOPs: 30.40 | 7: iteration 5170/ 7508 | consumed samples: 1323520 | consumed tokens: 2710568960 | elapsed time per iteration (s): 0.30 | learning rate: 6.049E-05 | global batch size: 256 | lm loss: 3.849232E+00 | grad norm: 0.441 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.500 | TFLOPs: 30.12 | 7: iteration 5180/ 7508 | consumed samples: 1326080 | consumed tokens: 2715811840 | elapsed time per iteration (s): 0.29 | learning rate: 6.017E-05 | global batch size: 256 | lm loss: 3.840411E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.337 | TFLOPs: 30.40 | 7: iteration 5190/ 7508 | consumed samples: 1328640 | consumed tokens: 2721054720 | elapsed time per iteration (s): 0.29 | learning rate: 5.986E-05 | global batch size: 256 | lm loss: 3.846741E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.110 | TFLOPs: 30.39 | 7: iteration 5200/ 7508 | consumed samples: 1331200 | consumed tokens: 2726297600 | elapsed time per iteration (s): 0.29 | learning rate: 5.954E-05 | global batch size: 256 | lm loss: 3.848927E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.751 | TFLOPs: 30.41 | 7: iteration 5210/ 7508 | consumed samples: 1333760 | consumed tokens: 2731540480 | elapsed time per iteration (s): 0.29 | learning rate: 5.923E-05 | global batch size: 256 | lm loss: 3.845015E+00 | grad 
norm: 0.420 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.118 | TFLOPs: 30.39 | 7: iteration 5220/ 7508 | consumed samples: 1336320 | consumed tokens: 2736783360 | elapsed time per iteration (s): 0.29 | learning rate: 5.891E-05 | global batch size: 256 | lm loss: 3.839911E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.700 | TFLOPs: 30.41 | 7: iteration 5230/ 7508 | consumed samples: 1338880 | consumed tokens: 2742026240 | elapsed time per iteration (s): 0.29 | learning rate: 5.860E-05 | global batch size: 256 | lm loss: 3.843142E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.278 | TFLOPs: 30.43 | 7: iteration 5240/ 7508 | consumed samples: 1341440 | consumed tokens: 2747269120 | elapsed time per iteration (s): 0.29 | learning rate: 5.829E-05 | global batch size: 256 | lm loss: 3.847757E+00 | grad norm: 0.421 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.947 | TFLOPs: 30.42 | 7: iteration 5250/ 7508 | consumed samples: 1344000 | consumed tokens: 2752512000 | elapsed time per iteration (s): 0.29 | learning rate: 5.798E-05 | global batch size: 256 | lm loss: 3.851442E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.943 | TFLOPs: 30.38 | 7: iteration 5260/ 7508 | consumed samples: 1346560 | consumed tokens: 2757754880 | elapsed time per iteration (s): 0.29 | learning rate: 5.767E-05 | global batch size: 256 | lm loss: 3.851361E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.154 | TFLOPs: 30.39 | 7: iteration 5270/ 7508 | consumed samples: 1349120 | consumed tokens: 2762997760 | elapsed time per iteration (s): 0.29 | learning rate: 5.736E-05 | global batch size: 256 | lm loss: 3.841211E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.393 | TFLOPs: 30.40 | 7: iteration 5280/ 7508 | consumed samples: 1351680 | consumed tokens: 2768240640 | elapsed time per iteration (s): 0.29 | learning rate: 5.705E-05 | global batch size: 256 | lm loss: 3.834391E+00 | grad norm: 0.438 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.257 | TFLOPs: 30.40 | 7: iteration 5290/ 7508 | consumed samples: 1354240 | consumed tokens: 2773483520 | elapsed time per iteration (s): 0.29 | learning rate: 5.674E-05 | global batch size: 256 | lm loss: 3.841656E+00 | grad norm: 0.444 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.007 | TFLOPs: 30.39 | 7: iteration 5300/ 7508 | consumed samples: 1356800 | consumed tokens: 2778726400 | elapsed time per iteration (s): 0.29 | learning rate: 5.644E-05 | global batch size: 256 | lm loss: 3.844041E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.904 | TFLOPs: 30.38 | 7: iteration 5310/ 7508 | consumed samples: 1359360 | consumed tokens: 2783969280 | elapsed time per iteration (s): 0.29 | learning rate: 5.613E-05 | global batch size: 256 | lm loss: 3.847498E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples 
per second: 868.162 | TFLOPs: 30.39 | 7: iteration 5320/ 7508 | consumed samples: 1361920 | consumed tokens: 2789212160 | elapsed time per iteration (s): 0.30 | learning rate: 5.583E-05 | global batch size: 256 | lm loss: 3.840045E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.782 | TFLOPs: 30.38 | 7: iteration 5330/ 7508 | consumed samples: 1364480 | consumed tokens: 2794455040 | elapsed time per iteration (s): 0.29 | learning rate: 5.552E-05 | global batch size: 256 | lm loss: 3.832684E+00 | grad norm: 0.402 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.959 | TFLOPs: 30.38 | 7: iteration 5340/ 7508 | consumed samples: 1367040 | consumed tokens: 2799697920 | elapsed time per iteration (s): 0.30 | learning rate: 5.522E-05 | global batch size: 256 | lm loss: 3.839622E+00 | grad norm: 0.442 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.689 | TFLOPs: 30.38 | 7: iteration 5350/ 7508 | consumed samples: 1369600 | consumed tokens: 2804940800 | elapsed time per iteration (s): 0.29 | learning rate: 5.492E-05 | global batch size: 256 | lm loss: 3.842306E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.825 | TFLOPs: 30.38 | 7: iteration 5360/ 7508 | consumed samples: 1372160 | consumed tokens: 2810183680 | elapsed time per iteration (s): 0.30 | learning rate: 5.462E-05 | global batch size: 256 | lm loss: 3.837685E+00 | grad norm: 0.434 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.540 | TFLOPs: 30.37 | 7: iteration 5370/ 7508 | consumed samples: 1374720 | consumed tokens: 2815426560 | elapsed time per iteration (s): 0.30 | learning rate: 5.432E-05 | global batch size: 256 | lm loss: 3.841267E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.586 | TFLOPs: 30.37 | 7: iteration 5380/ 7508 | consumed samples: 1377280 | consumed tokens: 2820669440 | elapsed time per iteration (s): 0.30 | learning rate: 5.402E-05 | global batch size: 256 | lm loss: 3.838758E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.780 | TFLOPs: 30.38 | 7: iteration 5390/ 7508 | consumed samples: 1379840 | consumed tokens: 2825912320 | elapsed time per iteration (s): 0.29 | learning rate: 5.373E-05 | global batch size: 256 | lm loss: 3.836586E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.993 | TFLOPs: 30.39 | 7: iteration 5400/ 7508 | consumed samples: 1382400 | consumed tokens: 2831155200 | elapsed time per iteration (s): 0.29 | learning rate: 5.343E-05 | global batch size: 256 | lm loss: 3.839700E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.224 | TFLOPs: 30.39 | 7: iteration 5410/ 7508 | consumed samples: 1384960 | consumed tokens: 2836398080 | elapsed time per iteration (s): 0.29 | learning rate: 5.313E-05 | global batch size: 256 | lm loss: 3.830160E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.980 | TFLOPs: 30.39 | 7: iteration 5420/ 7508 | consumed samples: 1387520 | consumed 
tokens: 2841640960 | elapsed time per iteration (s): 0.29 | learning rate: 5.284E-05 | global batch size: 256 | lm loss: 3.827401E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.377 | TFLOPs: 30.40 | 7: iteration 5430/ 7508 | consumed samples: 1390080 | consumed tokens: 2846883840 | elapsed time per iteration (s): 0.29 | learning rate: 5.255E-05 | global batch size: 256 | lm loss: 3.827856E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.267 | TFLOPs: 30.40 | 7: iteration 5440/ 7508 | consumed samples: 1392640 | consumed tokens: 2852126720 | elapsed time per iteration (s): 0.29 | learning rate: 5.225E-05 | global batch size: 256 | lm loss: 3.838923E+00 | grad norm: 0.444 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.374 | TFLOPs: 30.40 | 7: iteration 5450/ 7508 | consumed samples: 1395200 | consumed tokens: 2857369600 | elapsed time per iteration (s): 0.29 | learning rate: 5.196E-05 | global batch size: 256 | lm loss: 3.836755E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.291 | TFLOPs: 30.40 | 7: iteration 5460/ 7508 | consumed samples: 1397760 | consumed tokens: 2862612480 | elapsed time per iteration (s): 0.30 | learning rate: 5.167E-05 | global batch size: 256 | lm loss: 3.837990E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.683 | TFLOPs: 30.38 | 7: iteration 5470/ 7508 | consumed samples: 1400320 | consumed tokens: 2867855360 | elapsed time per iteration (s): 0.30 | learning rate: 5.138E-05 | global batch size: 256 | lm loss: 3.834159E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.996 | TFLOPs: 30.00 | 7: iteration 5480/ 7508 | consumed samples: 1402880 | consumed tokens: 2873098240 | elapsed time per iteration (s): 0.30 | learning rate: 5.109E-05 | global batch size: 256 | lm loss: 3.832235E+00 | grad norm: 0.432 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.709 | TFLOPs: 30.38 | 7: iteration 5490/ 7508 | consumed samples: 1405440 | consumed tokens: 2878341120 | elapsed time per iteration (s): 0.29 | learning rate: 5.081E-05 | global batch size: 256 | lm loss: 3.830524E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.253 | TFLOPs: 30.40 | 7: iteration 5500/ 7508 | consumed samples: 1408000 | consumed tokens: 2883584000 | elapsed time per iteration (s): 0.29 | learning rate: 5.052E-05 | global batch size: 256 | lm loss: 3.832827E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.010 | TFLOPs: 30.39 | 7: iteration 5510/ 7508 | consumed samples: 1410560 | consumed tokens: 2888826880 | elapsed time per iteration (s): 0.29 | learning rate: 5.024E-05 | global batch size: 256 | lm loss: 3.827189E+00 | grad norm: 0.420 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.988 | TFLOPs: 30.39 | 7: iteration 5520/ 7508 | consumed samples: 1413120 | consumed tokens: 2894069760 | elapsed time per iteration (s): 0.29 | learning rate: 4.995E-05 | global batch 
size: 256 | lm loss: 3.824293E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.510 | TFLOPs: 30.40 | 7: iteration 5530/ 7508 | consumed samples: 1415680 | consumed tokens: 2899312640 | elapsed time per iteration (s): 0.29 | learning rate: 4.967E-05 | global batch size: 256 | lm loss: 3.832300E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.924 | TFLOPs: 30.38 | 7: iteration 5540/ 7508 | consumed samples: 1418240 | consumed tokens: 2904555520 | elapsed time per iteration (s): 0.29 | learning rate: 4.939E-05 | global batch size: 256 | lm loss: 3.827230E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.809 | TFLOPs: 30.38 | 7: iteration 5550/ 7508 | consumed samples: 1420800 | consumed tokens: 2909798400 | elapsed time per iteration (s): 0.29 | learning rate: 4.911E-05 | global batch size: 256 | lm loss: 3.828082E+00 | grad norm: 0.435 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.968 | TFLOPs: 30.39 | 7: iteration 5560/ 7508 | consumed samples: 1423360 | consumed tokens: 2915041280 | elapsed time per iteration (s): 0.30 | learning rate: 4.883E-05 | global batch size: 256 | lm loss: 3.825312E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.767 | TFLOPs: 30.38 | 7: iteration 5570/ 7508 | consumed samples: 1425920 | consumed tokens: 2920284160 | elapsed time per iteration (s): 0.30 | learning rate: 4.855E-05 | global batch size: 256 | lm loss: 3.819066E+00 | grad norm: 0.474 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 863.986 | TFLOPs: 30.25 | 7: iteration 5580/ 7508 | consumed samples: 1428480 | consumed tokens: 2925527040 | elapsed time per iteration (s): 0.30 | learning rate: 4.827E-05 | global batch size: 256 | lm loss: 3.827602E+00 | grad norm: 0.420 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.619 | TFLOPs: 30.20 | 7: iteration 5590/ 7508 | consumed samples: 1431040 | consumed tokens: 2930769920 | elapsed time per iteration (s): 0.29 | learning rate: 4.800E-05 | global batch size: 256 | lm loss: 3.824907E+00 | grad norm: 0.381 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.306 | TFLOPs: 30.40 | 7: iteration 5600/ 7508 | consumed samples: 1433600 | consumed tokens: 2936012800 | elapsed time per iteration (s): 0.29 | learning rate: 4.772E-05 | global batch size: 256 | lm loss: 3.830309E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.438 | TFLOPs: 30.40 | 7: iteration 5610/ 7508 | consumed samples: 1436160 | consumed tokens: 2941255680 | elapsed time per iteration (s): 0.29 | learning rate: 4.745E-05 | global batch size: 256 | lm loss: 3.817647E+00 | grad norm: 0.424 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.906 | TFLOPs: 30.42 | 7: iteration 5620/ 7508 | consumed samples: 1438720 | consumed tokens: 2946498560 | elapsed time per iteration (s): 0.29 | learning rate: 4.717E-05 | global batch size: 256 | lm loss: 3.821662E+00 | grad norm: 0.422 | num zeros: 0.0 | number of skipped iterations: 0 
| number of nan iterations: 0 | samples per second: 868.479 | TFLOPs: 30.40 | 7: iteration 5630/ 7508 | consumed samples: 1441280 | consumed tokens: 2951741440 | elapsed time per iteration (s): 0.29 | learning rate: 4.690E-05 | global batch size: 256 | lm loss: 3.822964E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.112 | TFLOPs: 30.39 | 7: iteration 5640/ 7508 | consumed samples: 1443840 | consumed tokens: 2956984320 | elapsed time per iteration (s): 0.29 | learning rate: 4.663E-05 | global batch size: 256 | lm loss: 3.831390E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.776 | TFLOPs: 30.41 | 7: iteration 5650/ 7508 | consumed samples: 1446400 | consumed tokens: 2962227200 | elapsed time per iteration (s): 0.29 | learning rate: 4.636E-05 | global batch size: 256 | lm loss: 3.825721E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.042 | TFLOPs: 30.39 | 7: iteration 5660/ 7508 | consumed samples: 1448960 | consumed tokens: 2967470080 | elapsed time per iteration (s): 0.29 | learning rate: 4.609E-05 | global batch size: 256 | lm loss: 3.820078E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.845 | TFLOPs: 30.42 | 7: iteration 5670/ 7508 | consumed samples: 1451520 | consumed tokens: 2972712960 | elapsed time per iteration (s): 0.29 | learning rate: 4.583E-05 | global batch size: 256 | lm loss: 3.825195E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.015 | TFLOPs: 30.42 | 7: iteration 5680/ 7508 | consumed samples: 1454080 | consumed tokens: 2977955840 | elapsed time per iteration (s): 0.29 | learning rate: 4.556E-05 | global batch size: 256 | lm loss: 3.822446E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.927 | TFLOPs: 30.42 | 7: iteration 5690/ 7508 | consumed samples: 1456640 | consumed tokens: 2983198720 | elapsed time per iteration (s): 0.29 | learning rate: 4.530E-05 | global batch size: 256 | lm loss: 3.820526E+00 | grad norm: 0.420 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.787 | TFLOPs: 30.41 | 7: iteration 5700/ 7508 | consumed samples: 1459200 | consumed tokens: 2988441600 | elapsed time per iteration (s): 0.30 | learning rate: 4.503E-05 | global batch size: 256 | lm loss: 3.820288E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 848.085 | TFLOPs: 29.69 | 7: iteration 5710/ 7508 | consumed samples: 1461760 | consumed tokens: 2993684480 | elapsed time per iteration (s): 0.30 | learning rate: 4.477E-05 | global batch size: 256 | lm loss: 3.816790E+00 | grad norm: 0.447 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 863.906 | TFLOPs: 30.24 | 7: iteration 5720/ 7508 | consumed samples: 1464320 | consumed tokens: 2998927360 | elapsed time per iteration (s): 0.30 | learning rate: 4.451E-05 | global batch size: 256 | lm loss: 3.821676E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.499 | TFLOPs: 30.37 | 7: iteration 5730/ 7508 | 
consumed samples: 1466880 | consumed tokens: 3004170240 | elapsed time per iteration (s): 0.30 | learning rate: 4.425E-05 | global batch size: 256 | lm loss: 3.825365E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.681 | TFLOPs: 29.99 | 7: iteration 5740/ 7508 | consumed samples: 1469440 | consumed tokens: 3009413120 | elapsed time per iteration (s): 0.29 | learning rate: 4.399E-05 | global batch size: 256 | lm loss: 3.818682E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.404 | TFLOPs: 30.40 | 7: iteration 5750/ 7508 | consumed samples: 1472000 | consumed tokens: 3014656000 | elapsed time per iteration (s): 0.30 | learning rate: 4.373E-05 | global batch size: 256 | lm loss: 3.825562E+00 | grad norm: 0.396 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 850.392 | TFLOPs: 29.77 | 7: iteration 5760/ 7508 | consumed samples: 1474560 | consumed tokens: 3019898880 | elapsed time per iteration (s): 0.30 | learning rate: 4.347E-05 | global batch size: 256 | lm loss: 3.816441E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 861.491 | TFLOPs: 30.16 | 7: iteration 5770/ 7508 | consumed samples: 1477120 | consumed tokens: 3025141760 | elapsed time per iteration (s): 0.29 | learning rate: 4.322E-05 | global batch size: 256 | lm loss: 3.819814E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.026 | TFLOPs: 30.42 | 7: iteration 5780/ 7508 | consumed samples: 1479680 | consumed tokens: 3030384640 | elapsed time per iteration (s): 0.29 | learning rate: 4.296E-05 | global batch size: 256 | lm loss: 3.814405E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.432 | TFLOPs: 30.40 | 7: iteration 5790/ 7508 | consumed samples: 1482240 | consumed tokens: 3035627520 | elapsed time per iteration (s): 0.30 | learning rate: 4.271E-05 | global batch size: 256 | lm loss: 3.818248E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 845.445 | TFLOPs: 29.60 | 7: iteration 5800/ 7508 | consumed samples: 1484800 | consumed tokens: 3040870400 | elapsed time per iteration (s): 0.30 | learning rate: 4.246E-05 | global batch size: 256 | lm loss: 3.823798E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.999 | TFLOPs: 30.14 | 7: iteration 5810/ 7508 | consumed samples: 1487360 | consumed tokens: 3046113280 | elapsed time per iteration (s): 0.30 | learning rate: 4.221E-05 | global batch size: 256 | lm loss: 3.818040E+00 | grad norm: 0.385 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 851.951 | TFLOPs: 29.82 | 7: iteration 5820/ 7508 | consumed samples: 1489920 | consumed tokens: 3051356160 | elapsed time per iteration (s): 0.29 | learning rate: 4.196E-05 | global batch size: 256 | lm loss: 3.811944E+00 | grad norm: 0.390 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.764 | TFLOPs: 30.45 | 7: iteration 5830/ 7508 | consumed samples: 1492480 | consumed tokens: 3056599040 | elapsed time per iteration (s): 0.29 | 
learning rate: 4.171E-05 | global batch size: 256 | lm loss: 3.819344E+00 | grad norm: 0.420 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.193 | TFLOPs: 30.43 | 7: iteration 5840/ 7508 | consumed samples: 1495040 | consumed tokens: 3061841920 | elapsed time per iteration (s): 0.29 | learning rate: 4.146E-05 | global batch size: 256 | lm loss: 3.820520E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.201 | TFLOPs: 30.43 | 7: iteration 5850/ 7508 | consumed samples: 1497600 | consumed tokens: 3067084800 | elapsed time per iteration (s): 0.29 | learning rate: 4.122E-05 | global batch size: 256 | lm loss: 3.803566E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.371 | TFLOPs: 30.43 | 7: iteration 5860/ 7508 | consumed samples: 1500160 | consumed tokens: 3072327680 | elapsed time per iteration (s): 0.29 | learning rate: 4.097E-05 | global batch size: 256 | lm loss: 3.815086E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.185 | TFLOPs: 30.43 | 7: iteration 5870/ 7508 | consumed samples: 1502720 | consumed tokens: 3077570560 | elapsed time per iteration (s): 0.30 | learning rate: 4.073E-05 | global batch size: 256 | lm loss: 3.809892E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.301 | TFLOPs: 30.05 | 7: iteration 5880/ 7508 | consumed samples: 1505280 | consumed tokens: 3082813440 | elapsed time per iteration (s): 0.30 | learning rate: 4.049E-05 | global batch size: 256 | lm loss: 3.811229E+00 | grad norm: 0.383 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.896 | TFLOPs: 30.21 | 7: iteration 5890/ 7508 | consumed samples: 1507840 | consumed tokens: 3088056320 | elapsed time per iteration (s): 0.29 | learning rate: 4.025E-05 | global batch size: 256 | lm loss: 3.811837E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.902 | TFLOPs: 30.42 | 7: iteration 5900/ 7508 | consumed samples: 1510400 | consumed tokens: 3093299200 | elapsed time per iteration (s): 0.31 | learning rate: 4.001E-05 | global batch size: 256 | lm loss: 3.811555E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 835.843 | TFLOPs: 29.26 | 7: iteration 5910/ 7508 | consumed samples: 1512960 | consumed tokens: 3098542080 | elapsed time per iteration (s): 0.29 | learning rate: 3.977E-05 | global batch size: 256 | lm loss: 3.808471E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.456 | TFLOPs: 30.40 | 7: iteration 5920/ 7508 | consumed samples: 1515520 | consumed tokens: 3103784960 | elapsed time per iteration (s): 0.30 | learning rate: 3.953E-05 | global batch size: 256 | lm loss: 3.812763E+00 | grad norm: 0.429 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.263 | TFLOPs: 30.29 | 7: iteration 5930/ 7508 | consumed samples: 1518080 | consumed tokens: 3109027840 | elapsed time per iteration (s): 0.29 | learning rate: 3.929E-05 | global batch size: 256 | lm loss: 3.803513E+00 | grad norm: 0.414 | num 
zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.553 | TFLOPs: 30.44 |
7: iteration 5940/ 7508 | consumed samples: 1520640 | consumed tokens: 3114270720 | elapsed time per iteration (s): 0.29 | learning rate: 3.906E-05 | global batch size: 256 | lm loss: 3.809089E+00 | grad norm: 0.426 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.437 | TFLOPs: 30.44 |
7: iteration 5950/ 7508 | consumed samples: 1523200 | consumed tokens: 3119513600 | elapsed time per iteration (s): 0.29 | learning rate: 3.883E-05 | global batch size: 256 | lm loss: 3.803717E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.248 | TFLOPs: 30.43 |
7: iteration 5960/ 7508 | consumed samples: 1525760 | consumed tokens: 3124756480 | elapsed time per iteration (s): 0.30 | learning rate: 3.859E-05 | global batch size: 256 | lm loss: 3.806246E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 861.861 | TFLOPs: 30.17 |
7: iteration 5970/ 7508 | consumed samples: 1528320 | consumed tokens: 3129999360 | elapsed time per iteration (s): 0.29 | learning rate: 3.836E-05 | global batch size: 256 | lm loss: 3.813621E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.808 | TFLOPs: 30.41 |
7: iteration 5980/ 7508 | consumed samples: 1530880 | consumed tokens: 3135242240 | elapsed time per iteration (s): 0.29 | learning rate: 3.813E-05 | global batch size: 256 | lm loss: 3.808087E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.929 | TFLOPs: 30.38 |
7: iteration 5990/ 7508 | consumed samples: 1533440 | consumed tokens: 3140485120 | elapsed time per iteration (s): 0.29 | learning rate: 3.790E-05 | global batch size: 256 | lm loss: 3.801117E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.273 | TFLOPs: 30.40 |
0: [2023-03-16 23:21:24,368] [INFO] [logging.py:68:log_dist] [Rank 0] step=6000, skipped=0, lr=[3.76774148080129e-05, 3.76774148080129e-05, 3.76774148080129e-05], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
7: iteration 6000/ 7508 | consumed samples: 1536000 | consumed tokens: 3145728000 | elapsed time per iteration (s): 0.29 | learning rate: 3.768E-05 | global batch size: 256 | lm loss: 3.810360E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.013 | TFLOPs: 30.39 |
0: steps: 6000 loss: 3.8181 iter time (s): 0.293 samples/sec: 872.333
7: -----------------------------------------------------------------------------------------------
7: validation loss at iteration 6000 | lm loss value: 3.901920E+00 | lm loss PPL: 4.949739E+01 |
7: -----------------------------------------------------------------------------------------------
7: iteration 6010/ 7508 | consumed samples: 1538560 | consumed tokens: 3150970880 | elapsed time per iteration (s): 0.31 | learning rate: 3.745E-05 | global batch size: 256 | lm loss: 3.806123E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 837.686 | TFLOPs: 29.33 |
7: iteration 6020/ 7508 | consumed samples: 1541120 | consumed tokens: 3156213760 | elapsed time
per iteration (s): 0.29 | learning rate: 3.723E-05 | global batch size: 256 | lm loss: 3.807291E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.354 | TFLOPs: 30.43 | 7: iteration 6030/ 7508 | consumed samples: 1543680 | consumed tokens: 3161456640 | elapsed time per iteration (s): 0.29 | learning rate: 3.700E-05 | global batch size: 256 | lm loss: 3.805078E+00 | grad norm: 0.388 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.251 | TFLOPs: 30.43 | 7: iteration 6040/ 7508 | consumed samples: 1546240 | consumed tokens: 3166699520 | elapsed time per iteration (s): 0.29 | learning rate: 3.678E-05 | global batch size: 256 | lm loss: 3.809090E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.000 | TFLOPs: 30.42 | 7: iteration 6050/ 7508 | consumed samples: 1548800 | consumed tokens: 3171942400 | elapsed time per iteration (s): 0.29 | learning rate: 3.656E-05 | global batch size: 256 | lm loss: 3.800576E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.222 | TFLOPs: 30.43 | 7: iteration 6060/ 7508 | consumed samples: 1551360 | consumed tokens: 3177185280 | elapsed time per iteration (s): 0.29 | learning rate: 3.634E-05 | global batch size: 256 | lm loss: 3.803939E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.271 | TFLOPs: 30.43 | 7: iteration 6070/ 7508 | consumed samples: 1553920 | consumed tokens: 3182428160 | elapsed time per iteration (s): 0.29 | learning rate: 3.612E-05 | global batch size: 256 | lm loss: 3.805695E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.404 | TFLOPs: 30.44 | 7: iteration 6080/ 7508 | consumed samples: 1556480 | consumed tokens: 3187671040 | elapsed time per iteration (s): 0.29 | learning rate: 3.591E-05 | global batch size: 256 | lm loss: 3.807086E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.816 | TFLOPs: 30.41 | 7: iteration 6090/ 7508 | consumed samples: 1559040 | consumed tokens: 3192913920 | elapsed time per iteration (s): 0.29 | learning rate: 3.569E-05 | global batch size: 256 | lm loss: 3.803181E+00 | grad norm: 0.382 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.741 | TFLOPs: 30.41 | 7: iteration 6100/ 7508 | consumed samples: 1561600 | consumed tokens: 3198156800 | elapsed time per iteration (s): 0.29 | learning rate: 3.548E-05 | global batch size: 256 | lm loss: 3.794995E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.488 | TFLOPs: 30.40 | 7: iteration 6110/ 7508 | consumed samples: 1564160 | consumed tokens: 3203399680 | elapsed time per iteration (s): 0.29 | learning rate: 3.527E-05 | global batch size: 256 | lm loss: 3.807916E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.396 | TFLOPs: 30.40 | 7: iteration 6120/ 7508 | consumed samples: 1566720 | consumed tokens: 3208642560 | elapsed time per iteration (s): 0.29 | learning rate: 3.505E-05 | global batch size: 256 | lm loss: 3.808664E+00 | 
grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.113 | TFLOPs: 30.39 | 7: iteration 6130/ 7508 | consumed samples: 1569280 | consumed tokens: 3213885440 | elapsed time per iteration (s): 0.29 | learning rate: 3.484E-05 | global batch size: 256 | lm loss: 3.805507E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.357 | TFLOPs: 30.40 | 7: iteration 6140/ 7508 | consumed samples: 1571840 | consumed tokens: 3219128320 | elapsed time per iteration (s): 0.29 | learning rate: 3.464E-05 | global batch size: 256 | lm loss: 3.808718E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.031 | TFLOPs: 30.39 | 7: iteration 6150/ 7508 | consumed samples: 1574400 | consumed tokens: 3224371200 | elapsed time per iteration (s): 0.29 | learning rate: 3.443E-05 | global batch size: 256 | lm loss: 3.803166E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.760 | TFLOPs: 30.41 | 7: iteration 6160/ 7508 | consumed samples: 1576960 | consumed tokens: 3229614080 | elapsed time per iteration (s): 0.29 | learning rate: 3.422E-05 | global batch size: 256 | lm loss: 3.797660E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.991 | TFLOPs: 30.42 | 7: iteration 6170/ 7508 | consumed samples: 1579520 | consumed tokens: 3234856960 | elapsed time per iteration (s): 0.29 | learning rate: 3.402E-05 | global batch size: 256 | lm loss: 3.793156E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.995 | TFLOPs: 30.42 | 7: iteration 6180/ 7508 | consumed samples: 1582080 | consumed tokens: 3240099840 | elapsed time per iteration (s): 0.29 | learning rate: 3.382E-05 | global batch size: 256 | lm loss: 3.804596E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.486 | TFLOPs: 30.40 | 7: iteration 6190/ 7508 | consumed samples: 1584640 | consumed tokens: 3245342720 | elapsed time per iteration (s): 0.29 | learning rate: 3.361E-05 | global batch size: 256 | lm loss: 3.799745E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.723 | TFLOPs: 30.41 | 7: iteration 6200/ 7508 | consumed samples: 1587200 | consumed tokens: 3250585600 | elapsed time per iteration (s): 0.29 | learning rate: 3.341E-05 | global batch size: 256 | lm loss: 3.795256E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.094 | TFLOPs: 30.39 | 7: iteration 6210/ 7508 | consumed samples: 1589760 | consumed tokens: 3255828480 | elapsed time per iteration (s): 0.29 | learning rate: 3.321E-05 | global batch size: 256 | lm loss: 3.808676E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.665 | TFLOPs: 30.41 | 7: iteration 6220/ 7508 | consumed samples: 1592320 | consumed tokens: 3261071360 | elapsed time per iteration (s): 0.29 | learning rate: 3.302E-05 | global batch size: 256 | lm loss: 3.803462E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | 
samples per second: 868.867 | TFLOPs: 30.42 | 7: iteration 6230/ 7508 | consumed samples: 1594880 | consumed tokens: 3266314240 | elapsed time per iteration (s): 0.30 | learning rate: 3.282E-05 | global batch size: 256 | lm loss: 3.796984E+00 | grad norm: 0.390 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.015 | TFLOPs: 30.11 | 7: iteration 6240/ 7508 | consumed samples: 1597440 | consumed tokens: 3271557120 | elapsed time per iteration (s): 0.30 | learning rate: 3.262E-05 | global batch size: 256 | lm loss: 3.803166E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 853.662 | TFLOPs: 29.88 | 7: iteration 6250/ 7508 | consumed samples: 1600000 | consumed tokens: 3276800000 | elapsed time per iteration (s): 0.29 | learning rate: 3.243E-05 | global batch size: 256 | lm loss: 3.795613E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.717 | TFLOPs: 30.45 | 7: iteration 6260/ 7508 | consumed samples: 1602560 | consumed tokens: 3282042880 | elapsed time per iteration (s): 0.29 | learning rate: 3.224E-05 | global batch size: 256 | lm loss: 3.801330E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.471 | TFLOPs: 30.40 | 7: iteration 6270/ 7508 | consumed samples: 1605120 | consumed tokens: 3287285760 | elapsed time per iteration (s): 0.29 | learning rate: 3.205E-05 | global batch size: 256 | lm loss: 3.798146E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.885 | TFLOPs: 30.42 | 7: iteration 6280/ 7508 | consumed samples: 1607680 | consumed tokens: 3292528640 | elapsed time per iteration (s): 0.29 | learning rate: 3.186E-05 | global batch size: 256 | lm loss: 3.800270E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.758 | TFLOPs: 30.41 | 7: iteration 6290/ 7508 | consumed samples: 1610240 | consumed tokens: 3297771520 | elapsed time per iteration (s): 0.29 | learning rate: 3.167E-05 | global batch size: 256 | lm loss: 3.797126E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.068 | TFLOPs: 30.42 | 7: iteration 6300/ 7508 | consumed samples: 1612800 | consumed tokens: 3303014400 | elapsed time per iteration (s): 0.29 | learning rate: 3.148E-05 | global batch size: 256 | lm loss: 3.794486E+00 | grad norm: 0.379 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.780 | TFLOPs: 30.41 | 7: iteration 6310/ 7508 | consumed samples: 1615360 | consumed tokens: 3308257280 | elapsed time per iteration (s): 0.29 | learning rate: 3.130E-05 | global batch size: 256 | lm loss: 3.790299E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.571 | TFLOPs: 30.41 | 7: iteration 6320/ 7508 | consumed samples: 1617920 | consumed tokens: 3313500160 | elapsed time per iteration (s): 0.29 | learning rate: 3.112E-05 | global batch size: 256 | lm loss: 3.800546E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.234 | TFLOPs: 30.39 | 7: iteration 6330/ 7508 | consumed samples: 1620480 | 
consumed tokens: 3318743040 | elapsed time per iteration (s): 0.29 | learning rate: 3.093E-05 | global batch size: 256 | lm loss: 3.796576E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.407 | TFLOPs: 30.40 | 7: iteration 6340/ 7508 | consumed samples: 1623040 | consumed tokens: 3323985920 | elapsed time per iteration (s): 0.29 | learning rate: 3.075E-05 | global batch size: 256 | lm loss: 3.804327E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.227 | TFLOPs: 30.39 | 7: iteration 6350/ 7508 | consumed samples: 1625600 | consumed tokens: 3329228800 | elapsed time per iteration (s): 0.29 | learning rate: 3.057E-05 | global batch size: 256 | lm loss: 3.793572E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.147 | TFLOPs: 30.39 | 7: iteration 6360/ 7508 | consumed samples: 1628160 | consumed tokens: 3334471680 | elapsed time per iteration (s): 0.29 | learning rate: 3.039E-05 | global batch size: 256 | lm loss: 3.799318E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.900 | TFLOPs: 30.42 | 7: iteration 6370/ 7508 | consumed samples: 1630720 | consumed tokens: 3339714560 | elapsed time per iteration (s): 0.29 | learning rate: 3.022E-05 | global batch size: 256 | lm loss: 3.801453E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.078 | TFLOPs: 30.42 | 7: iteration 6380/ 7508 | consumed samples: 1633280 | consumed tokens: 3344957440 | elapsed time per iteration (s): 0.29 | learning rate: 3.004E-05 | global batch size: 256 | lm loss: 3.794503E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.072 | TFLOPs: 30.42 | 7: iteration 6390/ 7508 | consumed samples: 1635840 | consumed tokens: 3350200320 | elapsed time per iteration (s): 0.29 | learning rate: 2.987E-05 | global batch size: 256 | lm loss: 3.794837E+00 | grad norm: 0.409 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.592 | TFLOPs: 30.41 | 7: iteration 6400/ 7508 | consumed samples: 1638400 | consumed tokens: 3355443200 | elapsed time per iteration (s): 0.30 | learning rate: 2.970E-05 | global batch size: 256 | lm loss: 3.787638E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.751 | TFLOPs: 30.06 | 7: iteration 6410/ 7508 | consumed samples: 1640960 | consumed tokens: 3360686080 | elapsed time per iteration (s): 0.29 | learning rate: 2.952E-05 | global batch size: 256 | lm loss: 3.789117E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.009 | TFLOPs: 30.42 | 7: iteration 6420/ 7508 | consumed samples: 1643520 | consumed tokens: 3365928960 | elapsed time per iteration (s): 0.29 | learning rate: 2.936E-05 | global batch size: 256 | lm loss: 3.794703E+00 | grad norm: 0.382 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.113 | TFLOPs: 30.43 | 7: iteration 6430/ 7508 | consumed samples: 1646080 | consumed tokens: 3371171840 | elapsed time per iteration (s): 0.30 | learning rate: 2.919E-05 | global 
batch size: 256 | lm loss: 3.795289E+00 | grad norm: 0.390 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 856.618 | TFLOPs: 29.99 | 7: iteration 6440/ 7508 | consumed samples: 1648640 | consumed tokens: 3376414720 | elapsed time per iteration (s): 0.29 | learning rate: 2.902E-05 | global batch size: 256 | lm loss: 3.793467E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.785 | TFLOPs: 30.41 | 7: iteration 6450/ 7508 | consumed samples: 1651200 | consumed tokens: 3381657600 | elapsed time per iteration (s): 0.29 | learning rate: 2.886E-05 | global batch size: 256 | lm loss: 3.802374E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.439 | TFLOPs: 30.40 | 7: iteration 6460/ 7508 | consumed samples: 1653760 | consumed tokens: 3386900480 | elapsed time per iteration (s): 0.29 | learning rate: 2.869E-05 | global batch size: 256 | lm loss: 3.795906E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.547 | TFLOPs: 30.41 | 7: iteration 6470/ 7508 | consumed samples: 1656320 | consumed tokens: 3392143360 | elapsed time per iteration (s): 0.29 | learning rate: 2.853E-05 | global batch size: 256 | lm loss: 3.786533E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.056 | TFLOPs: 30.42 | 7: iteration 6480/ 7508 | consumed samples: 1658880 | consumed tokens: 3397386240 | elapsed time per iteration (s): 0.29 | learning rate: 2.837E-05 | global batch size: 256 | lm loss: 3.787733E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.656 | TFLOPs: 30.41 | 7: iteration 6490/ 7508 | consumed samples: 1661440 | consumed tokens: 3402629120 | elapsed time per iteration (s): 0.29 | learning rate: 2.821E-05 | global batch size: 256 | lm loss: 3.801614E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.961 | TFLOPs: 30.42 | 7: iteration 6500/ 7508 | consumed samples: 1664000 | consumed tokens: 3407872000 | elapsed time per iteration (s): 0.29 | learning rate: 2.805E-05 | global batch size: 256 | lm loss: 3.785563E+00 | grad norm: 0.378 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.352 | TFLOPs: 30.43 | 7: iteration 6510/ 7508 | consumed samples: 1666560 | consumed tokens: 3413114880 | elapsed time per iteration (s): 0.30 | learning rate: 2.789E-05 | global batch size: 256 | lm loss: 3.794909E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 852.651 | TFLOPs: 29.85 | 7: iteration 6520/ 7508 | consumed samples: 1669120 | consumed tokens: 3418357760 | elapsed time per iteration (s): 0.29 | learning rate: 2.774E-05 | global batch size: 256 | lm loss: 3.792338E+00 | grad norm: 0.418 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.872 | TFLOPs: 30.42 | 7: iteration 6530/ 7508 | consumed samples: 1671680 | consumed tokens: 3423600640 | elapsed time per iteration (s): 0.29 | learning rate: 2.759E-05 | global batch size: 256 | lm loss: 3.792043E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped 
iterations: 0 | number of nan iterations: 0 | samples per second: 868.999 | TFLOPs: 30.42 | 7: iteration 6540/ 7508 | consumed samples: 1674240 | consumed tokens: 3428843520 | elapsed time per iteration (s): 0.29 | learning rate: 2.743E-05 | global batch size: 256 | lm loss: 3.792202E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.420 | TFLOPs: 30.40 | 7: iteration 6550/ 7508 | consumed samples: 1676800 | consumed tokens: 3434086400 | elapsed time per iteration (s): 0.29 | learning rate: 2.728E-05 | global batch size: 256 | lm loss: 3.786329E+00 | grad norm: 0.402 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.758 | TFLOPs: 30.41 | 7: iteration 6560/ 7508 | consumed samples: 1679360 | consumed tokens: 3439329280 | elapsed time per iteration (s): 0.29 | learning rate: 2.713E-05 | global batch size: 256 | lm loss: 3.788938E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.446 | TFLOPs: 30.44 | 7: iteration 6570/ 7508 | consumed samples: 1681920 | consumed tokens: 3444572160 | elapsed time per iteration (s): 0.29 | learning rate: 2.699E-05 | global batch size: 256 | lm loss: 3.789277E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.152 | TFLOPs: 30.43 | 7: iteration 6580/ 7508 | consumed samples: 1684480 | consumed tokens: 3449815040 | elapsed time per iteration (s): 0.30 | learning rate: 2.684E-05 | global batch size: 256 | lm loss: 3.788308E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.088 | TFLOPs: 30.04 | 7: iteration 6590/ 7508 | consumed samples: 1687040 | consumed tokens: 3455057920 | elapsed time per iteration (s): 0.29 | learning rate: 2.669E-05 | global batch size: 256 | lm loss: 3.792664E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.996 | TFLOPs: 30.42 | 7: iteration 6600/ 7508 | consumed samples: 1689600 | consumed tokens: 3460300800 | elapsed time per iteration (s): 0.29 | learning rate: 2.655E-05 | global batch size: 256 | lm loss: 3.789325E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.463 | TFLOPs: 30.44 | 7: iteration 6610/ 7508 | consumed samples: 1692160 | consumed tokens: 3465543680 | elapsed time per iteration (s): 0.29 | learning rate: 2.641E-05 | global batch size: 256 | lm loss: 3.784665E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.468 | TFLOPs: 30.44 | 7: iteration 6620/ 7508 | consumed samples: 1694720 | consumed tokens: 3470786560 | elapsed time per iteration (s): 0.29 | learning rate: 2.627E-05 | global batch size: 256 | lm loss: 3.788924E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.401 | TFLOPs: 30.44 | 7: iteration 6630/ 7508 | consumed samples: 1697280 | consumed tokens: 3476029440 | elapsed time per iteration (s): 0.29 | learning rate: 2.613E-05 | global batch size: 256 | lm loss: 3.787251E+00 | grad norm: 0.422 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.182 | TFLOPs: 30.43 | 7: iteration 
6640/ 7508 | consumed samples: 1699840 | consumed tokens: 3481272320 | elapsed time per iteration (s): 0.29 | learning rate: 2.599E-05 | global batch size: 256 | lm loss: 3.784700E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.322 | TFLOPs: 30.43 | 7: iteration 6650/ 7508 | consumed samples: 1702400 | consumed tokens: 3486515200 | elapsed time per iteration (s): 0.29 | learning rate: 2.586E-05 | global batch size: 256 | lm loss: 3.782903E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.253 | TFLOPs: 30.43 | 7: iteration 6660/ 7508 | consumed samples: 1704960 | consumed tokens: 3491758080 | elapsed time per iteration (s): 0.29 | learning rate: 2.572E-05 | global batch size: 256 | lm loss: 3.785315E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.285 | TFLOPs: 30.43 | 7: iteration 6670/ 7508 | consumed samples: 1707520 | consumed tokens: 3497000960 | elapsed time per iteration (s): 0.29 | learning rate: 2.559E-05 | global batch size: 256 | lm loss: 3.785366E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.108 | TFLOPs: 30.43 | 7: iteration 6680/ 7508 | consumed samples: 1710080 | consumed tokens: 3502243840 | elapsed time per iteration (s): 0.29 | learning rate: 2.546E-05 | global batch size: 256 | lm loss: 3.782117E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.309 | TFLOPs: 30.43 | 7: iteration 6690/ 7508 | consumed samples: 1712640 | consumed tokens: 3507486720 | elapsed time per iteration (s): 0.29 | learning rate: 2.533E-05 | global batch size: 256 | lm loss: 3.784559E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.190 | TFLOPs: 30.43 | 7: iteration 6700/ 7508 | consumed samples: 1715200 | consumed tokens: 3512729600 | elapsed time per iteration (s): 0.29 | learning rate: 2.520E-05 | global batch size: 256 | lm loss: 3.784647E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.841 | TFLOPs: 30.42 | 7: iteration 6710/ 7508 | consumed samples: 1717760 | consumed tokens: 3517972480 | elapsed time per iteration (s): 0.29 | learning rate: 2.508E-05 | global batch size: 256 | lm loss: 3.786370E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.818 | TFLOPs: 30.41 | 7: iteration 6720/ 7508 | consumed samples: 1720320 | consumed tokens: 3523215360 | elapsed time per iteration (s): 0.29 | learning rate: 2.495E-05 | global batch size: 256 | lm loss: 3.787416E+00 | grad norm: 0.378 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.236 | TFLOPs: 30.43 | 7: iteration 6730/ 7508 | consumed samples: 1722880 | consumed tokens: 3528458240 | elapsed time per iteration (s): 0.29 | learning rate: 2.483E-05 | global batch size: 256 | lm loss: 3.771371E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.664 | TFLOPs: 30.44 | 7: iteration 6740/ 7508 | consumed samples: 1725440 | consumed tokens: 3533701120 | elapsed time per iteration (s): 
0.29 | learning rate: 2.470E-05 | global batch size: 256 | lm loss: 3.785756E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.028 | TFLOPs: 30.46 | 7: iteration 6750/ 7508 | consumed samples: 1728000 | consumed tokens: 3538944000 | elapsed time per iteration (s): 0.29 | learning rate: 2.458E-05 | global batch size: 256 | lm loss: 3.783387E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.235 | TFLOPs: 30.43 | 7: iteration 6760/ 7508 | consumed samples: 1730560 | consumed tokens: 3544186880 | elapsed time per iteration (s): 0.29 | learning rate: 2.446E-05 | global batch size: 256 | lm loss: 3.785186E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.869 | TFLOPs: 30.45 | 7: iteration 6770/ 7508 | consumed samples: 1733120 | consumed tokens: 3549429760 | elapsed time per iteration (s): 0.29 | learning rate: 2.435E-05 | global batch size: 256 | lm loss: 3.784602E+00 | grad norm: 0.402 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 870.123 | TFLOPs: 30.46 | 7: iteration 6780/ 7508 | consumed samples: 1735680 | consumed tokens: 3554672640 | elapsed time per iteration (s): 0.29 | learning rate: 2.423E-05 | global batch size: 256 | lm loss: 3.779955E+00 | grad norm: 0.390 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.845 | TFLOPs: 30.42 | 7: iteration 6790/ 7508 | consumed samples: 1738240 | consumed tokens: 3559915520 | elapsed time per iteration (s): 0.29 | learning rate: 2.412E-05 | global batch size: 256 | lm loss: 3.789326E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.392 | TFLOPs: 30.43 | 7: iteration 6800/ 7508 | consumed samples: 1740800 | consumed tokens: 3565158400 | elapsed time per iteration (s): 0.29 | learning rate: 2.400E-05 | global batch size: 256 | lm loss: 3.787194E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.977 | TFLOPs: 30.39 | 7: iteration 6810/ 7508 | consumed samples: 1743360 | consumed tokens: 3570401280 | elapsed time per iteration (s): 0.29 | learning rate: 2.389E-05 | global batch size: 256 | lm loss: 3.786616E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.663 | TFLOPs: 30.41 | 7: iteration 6820/ 7508 | consumed samples: 1745920 | consumed tokens: 3575644160 | elapsed time per iteration (s): 0.29 | learning rate: 2.378E-05 | global batch size: 256 | lm loss: 3.779247E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.422 | TFLOPs: 30.44 | 7: iteration 6830/ 7508 | consumed samples: 1748480 | consumed tokens: 3580887040 | elapsed time per iteration (s): 0.29 | learning rate: 2.367E-05 | global batch size: 256 | lm loss: 3.784989E+00 | grad norm: 0.373 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.506 | TFLOPs: 30.44 | 7: iteration 6840/ 7508 | consumed samples: 1751040 | consumed tokens: 3586129920 | elapsed time per iteration (s): 0.29 | learning rate: 2.357E-05 | global batch size: 256 | lm loss: 3.784497E+00 | grad norm: 0.390 | 
num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.320 | TFLOPs: 30.43 | 7: iteration 6850/ 7508 | consumed samples: 1753600 | consumed tokens: 3591372800 | elapsed time per iteration (s): 0.29 | learning rate: 2.346E-05 | global batch size: 256 | lm loss: 3.781947E+00 | grad norm: 0.401 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.001 | TFLOPs: 30.42 | 7: iteration 6860/ 7508 | consumed samples: 1756160 | consumed tokens: 3596615680 | elapsed time per iteration (s): 0.29 | learning rate: 2.336E-05 | global batch size: 256 | lm loss: 3.779704E+00 | grad norm: 0.385 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.720 | TFLOPs: 30.41 | 7: iteration 6870/ 7508 | consumed samples: 1758720 | consumed tokens: 3601858560 | elapsed time per iteration (s): 0.29 | learning rate: 2.326E-05 | global batch size: 256 | lm loss: 3.778200E+00 | grad norm: 0.385 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.891 | TFLOPs: 30.42 | 7: iteration 6880/ 7508 | consumed samples: 1761280 | consumed tokens: 3607101440 | elapsed time per iteration (s): 0.29 | learning rate: 2.316E-05 | global batch size: 256 | lm loss: 3.785155E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.905 | TFLOPs: 30.42 | 7: iteration 6890/ 7508 | consumed samples: 1763840 | consumed tokens: 3612344320 | elapsed time per iteration (s): 0.29 | learning rate: 2.306E-05 | global batch size: 256 | lm loss: 3.769931E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.743 | TFLOPs: 30.41 | 7: iteration 6900/ 7508 | consumed samples: 1766400 | consumed tokens: 3617587200 | elapsed time per iteration (s): 0.29 | learning rate: 2.296E-05 | global batch size: 256 | lm loss: 3.778264E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.781 | TFLOPs: 30.41 | 7: iteration 6910/ 7508 | consumed samples: 1768960 | consumed tokens: 3622830080 | elapsed time per iteration (s): 0.29 | learning rate: 2.286E-05 | global batch size: 256 | lm loss: 3.782406E+00 | grad norm: 0.388 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.142 | TFLOPs: 30.43 | 7: iteration 6920/ 7508 | consumed samples: 1771520 | consumed tokens: 3628072960 | elapsed time per iteration (s): 0.29 | learning rate: 2.277E-05 | global batch size: 256 | lm loss: 3.779286E+00 | grad norm: 0.385 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.463 | TFLOPs: 30.44 | 7: iteration 6930/ 7508 | consumed samples: 1774080 | consumed tokens: 3633315840 | elapsed time per iteration (s): 0.29 | learning rate: 2.268E-05 | global batch size: 256 | lm loss: 3.782230E+00 | grad norm: 0.390 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.910 | TFLOPs: 30.45 | 7: iteration 6940/ 7508 | consumed samples: 1776640 | consumed tokens: 3638558720 | elapsed time per iteration (s): 0.29 | learning rate: 2.258E-05 | global batch size: 256 | lm loss: 3.783708E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 
869.348 | TFLOPs: 30.43 | 7: iteration 6950/ 7508 | consumed samples: 1779200 | consumed tokens: 3643801600 | elapsed time per iteration (s): 0.29 | learning rate: 2.249E-05 | global batch size: 256 | lm loss: 3.780103E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.580 | TFLOPs: 30.44 | 7: iteration 6960/ 7508 | consumed samples: 1781760 | consumed tokens: 3649044480 | elapsed time per iteration (s): 0.29 | learning rate: 2.241E-05 | global batch size: 256 | lm loss: 3.782396E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.566 | TFLOPs: 30.44 | 7: iteration 6970/ 7508 | consumed samples: 1784320 | consumed tokens: 3654287360 | elapsed time per iteration (s): 0.29 | learning rate: 2.232E-05 | global batch size: 256 | lm loss: 3.781147E+00 | grad norm: 0.396 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.053 | TFLOPs: 30.42 | 7: iteration 6980/ 7508 | consumed samples: 1786880 | consumed tokens: 3659530240 | elapsed time per iteration (s): 0.29 | learning rate: 2.223E-05 | global batch size: 256 | lm loss: 3.781313E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.008 | TFLOPs: 30.42 | 7: iteration 6990/ 7508 | consumed samples: 1789440 | consumed tokens: 3664773120 | elapsed time per iteration (s): 0.29 | learning rate: 2.215E-05 | global batch size: 256 | lm loss: 3.778312E+00 | grad norm: 0.387 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.200 | TFLOPs: 30.43 | 7: iteration 7000/ 7508 | consumed samples: 1792000 | consumed tokens: 3670016000 | elapsed time per iteration (s): 0.29 | learning rate: 2.207E-05 | global batch size: 256 | lm loss: 3.777501E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.772 | TFLOPs: 30.41 | 7: ----------------------------------------------------------------------------------------------- 7: validation loss at iteration 7000 | lm loss value: 3.895399E+00 | lm loss PPL: 4.917565E+01 | 7: ----------------------------------------------------------------------------------------------- 7: iteration 7010/ 7508 | consumed samples: 1794560 | consumed tokens: 3675258880 | elapsed time per iteration (s): 0.30 | learning rate: 2.199E-05 | global batch size: 256 | lm loss: 3.777271E+00 | grad norm: 0.388 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 842.556 | TFLOPs: 29.50 | 7: iteration 7020/ 7508 | consumed samples: 1797120 | consumed tokens: 3680501760 | elapsed time per iteration (s): 0.29 | learning rate: 2.191E-05 | global batch size: 256 | lm loss: 3.785970E+00 | grad norm: 0.389 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.674 | TFLOPs: 30.41 | 7: iteration 7030/ 7508 | consumed samples: 1799680 | consumed tokens: 3685744640 | elapsed time per iteration (s): 0.29 | learning rate: 2.183E-05 | global batch size: 256 | lm loss: 3.778436E+00 | grad norm: 0.420 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.493 | TFLOPs: 30.40 | 7: iteration 7040/ 7508 | consumed samples: 1802240 | consumed tokens: 3690987520 | elapsed time per iteration (s): 0.29 | 
learning rate: 2.176E-05 | global batch size: 256 | lm loss: 3.780607E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.386 | TFLOPs: 30.40 | 7: iteration 7050/ 7508 | consumed samples: 1804800 | consumed tokens: 3696230400 | elapsed time per iteration (s): 0.29 | learning rate: 2.168E-05 | global batch size: 256 | lm loss: 3.780933E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.493 | TFLOPs: 30.40 | 7: iteration 7060/ 7508 | consumed samples: 1807360 | consumed tokens: 3701473280 | elapsed time per iteration (s): 0.29 | learning rate: 2.161E-05 | global batch size: 256 | lm loss: 3.779740E+00 | grad norm: 0.402 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.448 | TFLOPs: 30.40 | 7: iteration 7070/ 7508 | consumed samples: 1809920 | consumed tokens: 3706716160 | elapsed time per iteration (s): 0.30 | learning rate: 2.154E-05 | global batch size: 256 | lm loss: 3.771884E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 859.073 | TFLOPs: 30.07 | 7: iteration 7080/ 7508 | consumed samples: 1812480 | consumed tokens: 3711959040 | elapsed time per iteration (s): 0.30 | learning rate: 2.147E-05 | global batch size: 256 | lm loss: 3.780689E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 857.743 | TFLOPs: 30.03 | 7: iteration 7090/ 7508 | consumed samples: 1815040 | consumed tokens: 3717201920 | elapsed time per iteration (s): 0.30 | learning rate: 2.140E-05 | global batch size: 256 | lm loss: 3.785564E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 857.141 | TFLOPs: 30.01 | 7: iteration 7100/ 7508 | consumed samples: 1817600 | consumed tokens: 3722444800 | elapsed time per iteration (s): 0.29 | learning rate: 2.134E-05 | global batch size: 256 | lm loss: 3.780442E+00 | grad norm: 0.389 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.052 | TFLOPs: 30.42 | 7: iteration 7110/ 7508 | consumed samples: 1820160 | consumed tokens: 3727687680 | elapsed time per iteration (s): 0.29 | learning rate: 2.127E-05 | global batch size: 256 | lm loss: 3.780479E+00 | grad norm: 0.383 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.152 | TFLOPs: 30.39 | 7: iteration 7120/ 7508 | consumed samples: 1822720 | consumed tokens: 3732930560 | elapsed time per iteration (s): 0.29 | learning rate: 2.121E-05 | global batch size: 256 | lm loss: 3.782121E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.749 | TFLOPs: 30.41 | 7: iteration 7130/ 7508 | consumed samples: 1825280 | consumed tokens: 3738173440 | elapsed time per iteration (s): 0.30 | learning rate: 2.115E-05 | global batch size: 256 | lm loss: 3.774414E+00 | grad norm: 0.388 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 865.716 | TFLOPs: 30.31 | 7: iteration 7140/ 7508 | consumed samples: 1827840 | consumed tokens: 3743416320 | elapsed time per iteration (s): 0.29 | learning rate: 2.109E-05 | global batch size: 256 | lm loss: 3.769779E+00 | grad norm: 0.385 | num 
zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.257 | TFLOPs: 30.40 | 7: iteration 7150/ 7508 | consumed samples: 1830400 | consumed tokens: 3748659200 | elapsed time per iteration (s): 0.30 | learning rate: 2.103E-05 | global batch size: 256 | lm loss: 3.776224E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 849.121 | TFLOPs: 29.73 | 7: iteration 7160/ 7508 | consumed samples: 1832960 | consumed tokens: 3753902080 | elapsed time per iteration (s): 0.30 | learning rate: 2.097E-05 | global batch size: 256 | lm loss: 3.776356E+00 | grad norm: 0.422 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 860.572 | TFLOPs: 30.13 | 7: iteration 7170/ 7508 | consumed samples: 1835520 | consumed tokens: 3759144960 | elapsed time per iteration (s): 0.30 | learning rate: 2.092E-05 | global batch size: 256 | lm loss: 3.780530E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 854.916 | TFLOPs: 29.93 | 7: iteration 7180/ 7508 | consumed samples: 1838080 | consumed tokens: 3764387840 | elapsed time per iteration (s): 0.29 | learning rate: 2.087E-05 | global batch size: 256 | lm loss: 3.769138E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.111 | TFLOPs: 30.43 | 7: iteration 7190/ 7508 | consumed samples: 1840640 | consumed tokens: 3769630720 | elapsed time per iteration (s): 0.30 | learning rate: 2.081E-05 | global batch size: 256 | lm loss: 3.774704E+00 | grad norm: 0.385 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 858.500 | TFLOPs: 30.05 | 7: iteration 7200/ 7508 | consumed samples: 1843200 | consumed tokens: 3774873600 | elapsed time per iteration (s): 0.29 | learning rate: 2.076E-05 | global batch size: 256 | lm loss: 3.780930E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.171 | TFLOPs: 30.43 | 7: iteration 7210/ 7508 | consumed samples: 1845760 | consumed tokens: 3780116480 | elapsed time per iteration (s): 0.30 | learning rate: 2.071E-05 | global batch size: 256 | lm loss: 3.776530E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 861.638 | TFLOPs: 30.16 | 7: iteration 7220/ 7508 | consumed samples: 1848320 | consumed tokens: 3785359360 | elapsed time per iteration (s): 0.30 | learning rate: 2.067E-05 | global batch size: 256 | lm loss: 3.773000E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 850.759 | TFLOPs: 29.78 | 7: iteration 7230/ 7508 | consumed samples: 1850880 | consumed tokens: 3790602240 | elapsed time per iteration (s): 0.29 | learning rate: 2.062E-05 | global batch size: 256 | lm loss: 3.773567E+00 | grad norm: 0.402 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 869.777 | TFLOPs: 30.45 | 7: iteration 7240/ 7508 | consumed samples: 1853440 | consumed tokens: 3795845120 | elapsed time per iteration (s): 0.30 | learning rate: 2.058E-05 | global batch size: 256 | lm loss: 3.765873E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 844.815 
| TFLOPs: 29.57 | 7: iteration 7250/ 7508 | consumed samples: 1856000 | consumed tokens: 3801088000 | elapsed time per iteration (s): 0.30 | learning rate: 2.054E-05 | global batch size: 256 | lm loss: 3.776866E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 862.887 | TFLOPs: 30.21 | 7: iteration 7260/ 7508 | consumed samples: 1858560 | consumed tokens: 3806330880 | elapsed time per iteration (s): 0.30 | learning rate: 2.050E-05 | global batch size: 256 | lm loss: 3.771673E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.352 | TFLOPs: 30.36 | 7: iteration 7270/ 7508 | consumed samples: 1861120 | consumed tokens: 3811573760 | elapsed time per iteration (s): 0.30 | learning rate: 2.046E-05 | global batch size: 256 | lm loss: 3.775433E+00 | grad norm: 0.406 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.634 | TFLOPs: 30.34 | 7: iteration 7280/ 7508 | consumed samples: 1863680 | consumed tokens: 3816816640 | elapsed time per iteration (s): 0.30 | learning rate: 2.042E-05 | global batch size: 256 | lm loss: 3.778728E+00 | grad norm: 0.390 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.352 | TFLOPs: 30.33 | 7: iteration 7290/ 7508 | consumed samples: 1866240 | consumed tokens: 3822059520 | elapsed time per iteration (s): 0.30 | learning rate: 2.038E-05 | global batch size: 256 | lm loss: 3.775388E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.168 | TFLOPs: 30.32 | 7: iteration 7300/ 7508 | consumed samples: 1868800 | consumed tokens: 3827302400 | elapsed time per iteration (s): 0.30 | learning rate: 2.035E-05 | global batch size: 256 | lm loss: 3.773478E+00 | grad norm: 0.396 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.135 | TFLOPs: 30.32 | 7: iteration 7310/ 7508 | consumed samples: 1871360 | consumed tokens: 3832545280 | elapsed time per iteration (s): 0.30 | learning rate: 2.032E-05 | global batch size: 256 | lm loss: 3.766857E+00 | grad norm: 0.411 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.000 | TFLOPs: 30.32 | 7: iteration 7320/ 7508 | consumed samples: 1873920 | consumed tokens: 3837788160 | elapsed time per iteration (s): 0.30 | learning rate: 2.029E-05 | global batch size: 256 | lm loss: 3.781172E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.681 | TFLOPs: 30.34 | 7: iteration 7330/ 7508 | consumed samples: 1876480 | consumed tokens: 3843031040 | elapsed time per iteration (s): 0.30 | learning rate: 2.026E-05 | global batch size: 256 | lm loss: 3.777355E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 854.609 | TFLOPs: 29.92 | 7: iteration 7340/ 7508 | consumed samples: 1879040 | consumed tokens: 3848273920 | elapsed time per iteration (s): 0.30 | learning rate: 2.023E-05 | global batch size: 256 | lm loss: 3.767572E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 855.227 | TFLOPs: 29.94 | 7: iteration 7350/ 7508 | consumed samples: 1881600 | consumed tokens: 3853516800 | 
elapsed time per iteration (s): 0.30 | learning rate: 2.020E-05 | global batch size: 256 | lm loss: 3.770042E+00 | grad norm: 0.397 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 855.389 | TFLOPs: 29.94 | 7: iteration 7360/ 7508 | consumed samples: 1884160 | consumed tokens: 3858759680 | elapsed time per iteration (s): 0.30 | learning rate: 2.018E-05 | global batch size: 256 | lm loss: 3.775648E+00 | grad norm: 0.391 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.121 | TFLOPs: 30.32 | 7: iteration 7370/ 7508 | consumed samples: 1886720 | consumed tokens: 3864002560 | elapsed time per iteration (s): 0.30 | learning rate: 2.015E-05 | global batch size: 256 | lm loss: 3.768797E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.575 | TFLOPs: 30.37 | 7: iteration 7380/ 7508 | consumed samples: 1889280 | consumed tokens: 3869245440 | elapsed time per iteration (s): 0.30 | learning rate: 2.013E-05 | global batch size: 256 | lm loss: 3.769415E+00 | grad norm: 0.402 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.410 | TFLOPs: 30.37 | 7: iteration 7390/ 7508 | consumed samples: 1891840 | consumed tokens: 3874488320 | elapsed time per iteration (s): 0.30 | learning rate: 2.011E-05 | global batch size: 256 | lm loss: 3.762183E+00 | grad norm: 0.400 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.383 | TFLOPs: 30.36 | 7: iteration 7400/ 7508 | consumed samples: 1894400 | consumed tokens: 3879731200 | elapsed time per iteration (s): 0.30 | learning rate: 2.009E-05 | global batch size: 256 | lm loss: 3.772578E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.564 | TFLOPs: 30.34 | 7: iteration 7410/ 7508 | consumed samples: 1896960 | consumed tokens: 3884974080 | elapsed time per iteration (s): 0.30 | learning rate: 2.008E-05 | global batch size: 256 | lm loss: 3.766514E+00 | grad norm: 0.413 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.779 | TFLOPs: 30.34 | 7: iteration 7420/ 7508 | consumed samples: 1899520 | consumed tokens: 3890216960 | elapsed time per iteration (s): 0.30 | learning rate: 2.006E-05 | global batch size: 256 | lm loss: 3.775076E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.547 | TFLOPs: 30.37 | 7: iteration 7430/ 7508 | consumed samples: 1902080 | consumed tokens: 3895459840 | elapsed time per iteration (s): 0.30 | learning rate: 2.005E-05 | global batch size: 256 | lm loss: 3.765935E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.918 | TFLOPs: 30.35 | 7: iteration 7440/ 7508 | consumed samples: 1904640 | consumed tokens: 3900702720 | elapsed time per iteration (s): 0.30 | learning rate: 2.004E-05 | global batch size: 256 | lm loss: 3.772635E+00 | grad norm: 0.405 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.956 | TFLOPs: 30.35 | 7: iteration 7450/ 7508 | consumed samples: 1907200 | consumed tokens: 3905945600 | elapsed time per iteration (s): 0.29 | learning rate: 2.003E-05 | global batch size: 256 | lm loss: 
3.770854E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.329 | TFLOPs: 30.40 | 7: iteration 7460/ 7508 | consumed samples: 1909760 | consumed tokens: 3911188480 | elapsed time per iteration (s): 0.30 | learning rate: 2.002E-05 | global batch size: 256 | lm loss: 3.767321E+00 | grad norm: 0.389 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 867.744 | TFLOPs: 30.38 | 7: iteration 7470/ 7508 | consumed samples: 1912320 | consumed tokens: 3916431360 | elapsed time per iteration (s): 0.30 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 3.768142E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 866.289 | TFLOPs: 30.33 | 7: iteration 7480/ 7508 | consumed samples: 1914880 | consumed tokens: 3921674240 | elapsed time per iteration (s): 0.29 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 3.770027E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.084 | TFLOPs: 30.39 | 7: iteration 7490/ 7508 | consumed samples: 1917440 | consumed tokens: 3926917120 | elapsed time per iteration (s): 0.29 | learning rate: 2.000E-05 | global batch size: 256 | lm loss: 3.769112E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.594 | TFLOPs: 30.41 | 7: iteration 7500/ 7508 | consumed samples: 1920000 | consumed tokens: 3932160000 | elapsed time per iteration (s): 0.29 | learning rate: 2.000E-05 | global batch size: 256 | lm loss: 3.768458E+00 | grad norm: 0.399 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 868.181 | TFLOPs: 30.39 | 0: [after training is done] datetime: 2023-03-16 23:28:49 0: saving checkpoint at iteration 7508 to checkpoints_146m3b9100mdedup 7: ----------------------------------------------------------------------------------------------------------------- 7: validation loss at the end of training for val data | lm loss value: 3.904133E+00 | lm loss PPL: 4.960703E+01 | 7: ----------------------------------------------------------------------------------------------------------------- 0: [2023-03-16 23:28:49,986] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step7508 is begin to save! 0: [2023-03-16 23:28:49,989] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_01-model_00-model_states.pt... 0: [2023-03-16 23:28:50,075] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_01-model_00-model_states.pt. 0: [2023-03-16 23:28:50,075] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_03-model_00-model_states.pt... 0: [2023-03-16 23:28:50,093] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_03-model_00-model_states.pt. 0: [2023-03-16 23:28:50,093] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_04-model_00-model_states.pt... 0: [2023-03-16 23:28:50,108] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_04-model_00-model_states.pt. 
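The headline numbers above can be reproduced from the launch arguments alone (seq-length 2048, global batch size 256, lr 2e-4 cosine-decayed to min-lr 2e-5 over 1,922,149 samples after 19,221 warmup samples). The short sketch below is only a consistency check against values copied from the log; the cosine formula is the standard Megatron-style schedule and is an assumption, since the log itself does not print it.

    # Sanity-check sketch for the numbers reported above. All constants are copied
    # either from the launch arguments or from the log records; the cosine formula
    # is the usual Megatron-style schedule and is an assumption, not quoted output.
    import math

    seq_length = 2048
    global_batch_size = 256

    # Perplexity is exp(LM loss):
    print(math.exp(3.895399))   # ~49.18 -> "lm loss PPL: 4.917565E+01" at iteration 7000
    print(math.exp(3.904133))   # ~49.61 -> "lm loss PPL: 4.960703E+01" at end of training

    # Token accounting: consumed tokens = consumed samples * sequence length.
    print(1_920_000 * seq_length)          # 3_932_160_000, as logged at iteration 7500

    # Throughput: the logged samples/s implies ~0.295 s per 256-sample iteration,
    # which the log rounds to 0.29-0.30 s.
    print(global_batch_size / 868.181)

    # Cosine LR decay from 2e-4 to 2e-5 over --lr-decay-samples, warmup excluded
    # (assumed schedule).
    def cosine_lr(consumed, lr=2e-4, min_lr=2e-5, decay=1_922_149, warmup=19_221):
        frac = (consumed - warmup) / (decay - warmup)
        return min_lr + 0.5 * (lr - min_lr) * (1 + math.cos(math.pi * frac))

    print(cosine_lr(1_792_000))  # ~2.207e-05, matching iteration 7000
    print(cosine_lr(1_920_000))  # ~2.000e-05, matching iteration 7500

The computed learning rates match the logged 2.207E-05 at iteration 7000 and 2.000E-05 at iteration 7500, i.e. the schedule has effectively bottomed out at --min-lr by the end of the run.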
0: [2023-03-16 23:28:50,108] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_05-model_00-model_states.pt... 0: [2023-03-16 23:28:50,123] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_05-model_00-model_states.pt. 0: [2023-03-16 23:28:50,123] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_06-model_00-model_states.pt... 0: [2023-03-16 23:28:50,138] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_06-model_00-model_states.pt. 0: [2023-03-16 23:28:50,139] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_07-model_00-model_states.pt... 0: [2023-03-16 23:28:50,154] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_07-model_00-model_states.pt. 0: [2023-03-16 23:28:50,154] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_08-model_00-model_states.pt... 0: [2023-03-16 23:28:50,169] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_08-model_00-model_states.pt. 0: [2023-03-16 23:28:50,169] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_09-model_00-model_states.pt... 0: [2023-03-16 23:28:50,184] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_09-model_00-model_states.pt. 0: [2023-03-16 23:28:50,184] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_10-model_00-model_states.pt... 0: [2023-03-16 23:28:50,199] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_10-model_00-model_states.pt. 0: [2023-03-16 23:28:50,200] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_11-model_00-model_states.pt... 0: [2023-03-16 23:28:50,215] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_11-model_00-model_states.pt. 0: [2023-03-16 23:28:50,215] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_12-model_00-model_states.pt... 0: [2023-03-16 23:28:50,230] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_12-model_00-model_states.pt. 0: [2023-03-16 23:28:50,230] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_13-model_00-model_states.pt... 0: [2023-03-16 23:28:50,245] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_13-model_00-model_states.pt. 0: [2023-03-16 23:28:50,245] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_14-model_00-model_states.pt... 0: [2023-03-16 23:28:50,260] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_14-model_00-model_states.pt. 0: [2023-03-16 23:28:50,261] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_15-model_00-model_states.pt... 
0: [2023-03-16 23:28:50,276] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_15-model_00-model_states.pt. 0: [2023-03-16 23:28:50,276] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_16-model_00-model_states.pt... 0: [2023-03-16 23:28:50,291] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_16-model_00-model_states.pt. 0: [2023-03-16 23:28:50,291] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_17-model_00-model_states.pt... 0: [2023-03-16 23:28:50,306] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_17-model_00-model_states.pt. 0: [2023-03-16 23:28:50,306] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/layer_19-model_00-model_states.pt... 0: [2023-03-16 23:28:50,307] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/layer_19-model_00-model_states.pt. 0: [2023-03-16 23:28:50,308] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_146m3b9100mdedup/global_step7508/mp_rank_00_model_states.pt 0: [2023-03-16 23:28:50,308] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/mp_rank_00_model_states.pt... 0: [2023-03-16 23:28:50,310] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/mp_rank_00_model_states.pt. 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 
7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 
7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 
1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 1: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 4: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 5: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 6: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2023-03-16 23:28:50,327] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 0: [2023-03-16 23:28:50,359] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 
3: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 6: [2023-03-16 23:28:50,361] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2023-03-16 23:28:50,361] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now! 6: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now! 3: [2023-03-16 23:28:50,361] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now! 0: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 5: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:28:50,361] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now! 5: [2023-03-16 23:28:50,361] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 5: [2023-03-16 23:28:50,361] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now! 0: [2023-03-16 23:28:50,362] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2023-03-16 23:28:50,362] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2023-03-16 23:28:50,362] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now! 4: [2023-03-16 23:28:50,363] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2023-03-16 23:28:50,363] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2023-03-16 23:28:50,363] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now! 0: [2023-03-16 23:28:50,363] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 
0: [2023-03-16 23:28:50,363] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt
0: [2023-03-16 23:28:50,363] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now!
4: [2023-03-16 23:28:50,363] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt.
4: [2023-03-16 23:28:50,363] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt
4: [2023-03-16 23:28:50,363] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now!
[Equivalent save / bf16_zero checkpoint saved / commit log triplets follow for the remaining bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt shards, interleaved across nodes 0-7, all timestamped between 23:28:50,363 and 23:28:50,376.]
0: [2023-03-16 23:28:50,388] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m3b9100mdedup/global_step7508/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
0: [2023-03-16 23:28:50,388] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step7508 is ready now!
0: successfully saved checkpoint at iteration 7508 to checkpoints_146m3b9100mdedup
END 3326484: Thu 16 Mar 2023 11:28:55 PM EET
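The per-rank bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt files recorded above are ordinary PyTorch pickles, so one quick way to sanity-check the finished global_step7508 checkpoint is to open a single shard on the CPU and list what it contains. The sketch below is illustrative only: the shard path is copied from the log, but the internal layout of the saved object is an assumption, so the code prints whatever keys are present rather than relying on any specific ones.

```python
# Minimal sketch (not part of the original run): inspect one optimizer-state
# shard from checkpoints_146m3b9100mdedup/global_step7508. The path comes from
# the log above; the structure of the loaded object is an assumption.
import torch

shard_path = (
    "checkpoints_146m3b9100mdedup/global_step7508/"
    "bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt"
)

# A plain torch.load on the CPU is enough to open the file; no GPU is needed.
state = torch.load(shard_path, map_location="cpu")

if isinstance(state, dict):
    # List top-level keys and value types instead of assuming specific names.
    for key, value in state.items():
        print(f"{key}: {type(value).__name__}")
else:
    print(type(state).__name__)
```

Loading the shard inside the same Python environment used for training is the safest choice, since any custom classes pickled into the checkpoint would otherwise fail to deserialize.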