Model parameters: d_model 768, ffw_size 3072, kv_size 64, n_heads 12, n_layers 15

Megatron-DeepSpeed/pretrain_gpt.py
    --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1
    --num-layers 15 --hidden-size 768 --num-attention-heads 12 --kv-channels 64 --ffn-hidden-size 3072
    --seq-length 2048 --max-position-embeddings 2048 --micro-batch-size 4 --global-batch-size 256
    --train-samples 29_492_188 --vocab-file gpt2/vocab.json --merge-file gpt2/merges.txt
    --loss-scale 12 --clip-grad 1.0 --kill-switch-path kill-switch-146m60b100mdedup
    --bf16 --checkpoint-activations
    --optimizer adam --adam-beta1 0.9 --adam-beta2 0.999 --adam-eps 1e-8
    --lr 2e-4 --min-lr 2e-5 --lr-decay-style cosine --lr-decay-samples 29_492_188 --lr-warmup-samples 294_922
    --clip-grad 1.0 --weight-decay 1e-1
    --log-interval 100 --save-interval 10000 --eval-interval 10000 --eval-iters 1
    --tensorboard-dir tensorboard_146m60b100mdedup --tensorboard-queue-size 5
    --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard
    --save checkpoints_146m60b100mdedup --load checkpoints_146m60b100mdedup
    --train-weighted-split-paths-path train100mdedup.txt --valid-weighted-split-paths-path val.txt
    --data-impl mmap --deepspeed --deepspeed_config ds_configs/3328801.json --zero-stage 0

START 3328801: Fri 17 Mar 2023 10:51:52 AM EET

0: ======================= ROCm System Management Interface =======================
0: ================================= Concise Info =================================
0: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
0: 0    49.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 1    48.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 2    41.0c  90.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 3    44.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 4    38.0c  94.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 5    51.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 6    40.0c  85.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 7    45.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: ================================================================================
0: ============================= End of ROCm SMI Log ==============================

The remaining nodes (1-7) printed equivalent ROCm SMI reports at job start: every GCD idle (0% GPU and VRAM utilization) at 800Mhz SCLK / 1600Mhz MCLK, temperatures between 38.0c and 51.0c, and 81-96 W on the even-numbered GCDs (the odd-numbered GCDs report N/A power and a 0.0 W cap).

5: Launching on nid006701 (5/8), master nid006696 port 9999, GPUs 8, CUDA: True
3: Launching on nid006699 (3/8), master nid006696 port 9999, GPUs 8, CUDA: True
4: Launching on nid006700 (4/8), master nid006696 port 9999, GPUs 8, CUDA: True
1: Launching on nid006697 (1/8), master nid006696 port 9999, GPUs 8, CUDA: True
7: Launching on nid006703 (7/8), master nid006696 port 9999, GPUs 8, CUDA: True
2: Launching on nid006698 (2/8), master nid006696 port 9999, GPUs 8, CUDA: True
6: Launching on nid006702 (6/8), master nid006696 port 9999, GPUs 8, CUDA: True
0: Launching on nid006696 (0/8), master nid006696 port 9999, GPUs 8, CUDA: True
0: using world size: 64, data-parallel-size: 64, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
0: using torch.bfloat16 for parameters ...
0: ------------------------ arguments ------------------------
0: abort_on_unmet_fused_kernel_constraints ......... False 0: accumulate_allreduce_grads_in_fp32 .............. True 0: adam_beta1 ...................................... 0.9 0: adam_beta2 ...................................... 0.999 0: adam_eps ........................................ 1e-08 0: adlr_autoresume ................................. False 0: adlr_autoresume_interval ........................ 1000 0: apply_query_key_layer_scaling ................... True 0: apply_residual_connection_post_layernorm ........ False 0: attention_dropout ............................... 0.1 0: attention_softmax_in_fp32 ....................... False 0: bert_binary_head ................................ True 0: bert_load ....................................... None 0: bf16 ............................................ True 0: bias_dropout_fusion ............................. True 0: bias_gelu_fusion ................................ True 0: biencoder_projection_dim ........................ 0 0: biencoder_shared_query_context_model ............ False 0: block_data_path ................................. None 0: checkpoint_activations .......................... True 0: checkpoint_in_cpu ............................... False 0: checkpoint_num_layers ........................... 1 0: clip_grad ....................................... 1.0 0: codecarbon_dir .................................. None 0: consumed_train_samples .......................... 0 0: consumed_train_tokens ........................... 0 0: consumed_valid_samples .......................... 0 0: contigious_checkpointing ........................ False 0: cpu_optimizer ................................... False 0: cpu_torch_adam .................................. False 0: curriculum_learning ............................. False 0: data_impl ....................................... mmap 0: data_parallel_size .............................. 
64 0: data_path ....................................... None 0: dataloader_type ................................. single 0: DDP_impl ........................................ local 0: decoder_seq_length .............................. None 0: deepscale ....................................... False 0: deepscale_config ................................ None 0: deepspeed ....................................... True 0: deepspeed_activation_checkpointing .............. False 0: deepspeed_config ................................ ds_configs/3328801.json 0: deepspeed_mpi ................................... False 0: distribute_checkpointed_activations ............. False 0: distributed_backend ............................. nccl 0: embed_layernorm ................................. False 0: embedding_path .................................. None 0: encoder_seq_length .............................. 2048 0: eod_mask_loss ................................... False 0: eval_interval ................................... 10000 0: eval_iters ...................................... 1 0: eval_only ....................................... None 0: evidence_data_path .............................. None 0: exit_duration_in_mins ........................... None 0: exit_interval ................................... None 0: ffn_hidden_size ................................. 3072 0: finetune ........................................ False 0: fp16 ............................................ False 0: fp16_lm_cross_entropy ........................... False 0: fp32_residual_connection ........................ False 0: gigaflos_no_embeds .............................. 0 0: global_batch_size ............................... 256 0: glu_activation .................................. None 0: hidden_dropout .................................. 0.1 0: hidden_size ..................................... 768 0: hysteresis ...................................... 2 0: ict_head_size ................................... None 0: ict_load ........................................ None 0: img_dim ......................................... 224 0: indexer_batch_size .............................. 128 0: indexer_log_interval ............................ 1000 0: inference ....................................... False 0: init_method_std ................................. 0.02 0: init_method_xavier_uniform ...................... False 0: initial_loss_scale .............................. 4294967296 0: kill_switch_path ................................ kill-switch-146m60b100mdedup 0: kv_channels ..................................... 64 0: layer_norm_fusion ............................... True 0: layernorm_epsilon ............................... 1e-05 0: lazy_mpu_init ................................... None 0: load ............................................ checkpoints_146m60b100mdedup 0: local_rank ...................................... None 0: log_batch_size_to_tensorboard ................... True 0: log_interval .................................... 100 0: log_learning_rate_to_tensorboard ................ True 0: log_level ....................................... None 0: log_level_replica ............................... None 0: log_loss_scale_to_tensorboard ................... True 0: log_num_zeros_in_grad ........................... False 0: log_params_norm ................................. False 0: log_path ........................................ None 0: log_timers_to_tensorboard ....................... True 0: log_validation_ppl_to_tensorboard ............... 
True 0: loss_on_targets_only ............................ False 0: loss_scale ...................................... 12.0 0: loss_scale_window ............................... 1000 0: lr .............................................. 0.0002 0: lr_decay_iters .................................. None 0: lr_decay_samples ................................ 29492188 0: lr_decay_style .................................. cosine 0: lr_decay_tokens ................................. None 0: lr_warmup_fraction .............................. None 0: lr_warmup_iters ................................. 0 0: lr_warmup_samples ............................... 294922 0: make_vocab_size_divisible_by .................... 128 0: mask_prob ....................................... 0.15 0: masked_softmax_fusion ........................... True 0: max_position_embeddings ......................... 2048 0: mean_noise_span_length .......................... None 0: memory_centric_tiled_linear ..................... False 0: merge_file ...................................... gpt2/merges.txt 0: micro_batch_size ................................ 4 0: min_loss_scale .................................. 1.0 0: min_lr .......................................... 2e-05 0: mmap_warmup ..................................... False 0: no_load_optim ................................... None 0: no_load_rng ..................................... None 0: no_save_optim ................................... None 0: no_save_rng ..................................... None 0: noise_density ................................... None 0: num_attention_heads ............................. 12 0: num_channels .................................... 3 0: num_classes ..................................... 1000 0: num_layers ...................................... 15 0: num_layers_per_virtual_pipeline_stage ........... None 0: num_workers ..................................... 2 0: onnx_safe ....................................... None 0: openai_gelu ..................................... False 0: optimizer ....................................... adam 0: optimizer_fusion ................................ True 0: override_lr_scheduler ........................... False 0: pad_vocab_size_to ............................... None 0: params_dtype .................................... torch.bfloat16 0: partition_activations ........................... False 0: patch_dim ....................................... 16 0: pipeline_model_parallel_size .................... 1 0: position_embedding_type ......................... PositionEmbeddingType.absolute 0: pp_partition_method ............................. None 0: profile_backward ................................ False 0: query_in_block_prob ............................. 0.1 0: rampup_batch_size ............................... None 0: rank ............................................ 0 0: remote_device ................................... none 0: reset_attention_mask ............................ False 0: reset_position_ids .............................. False 0: reset_progress .................................. None 0: retriever_report_topk_accuracies ................ [] 0: retriever_score_scaling ......................... False 0: retriever_seq_length ............................ 256 0: reweight_loss_based_on_position_frequency ....... False 0: sample_rate ..................................... 1.0 0: save ............................................ checkpoints_146m60b100mdedup 0: save_interval ................................... 
10000 0: scatter_gather_tensors_in_pipeline .............. True 0: scattered_embeddings ............................ False 0: seed ............................................ 1234 0: seq_length ...................................... 2048 0: sgd_momentum .................................... 0.9 0: short_seq_prob .................................. 0.1 0: skip_train_iteration_range ...................... None 0: split ........................................... None 0: split_transformers .............................. False 0: sync_tp_duplicated_parameters ................... False 0: synchronize_each_layer .......................... False 0: tensor_model_parallel_size ...................... 1 0: tensorboard_dir ................................. tensorboard_146m60b100mdedup 0: tensorboard_log_interval ........................ 1 0: tensorboard_queue_size .......................... 5 0: test_weighted_split_paths ....................... None 0: test_weighted_split_paths_path .................. None 0: tile_factor ..................................... 1 0: titles_data_path ................................ None 0: tokenizer_name_or_path .......................... None 0: tokenizer_type .................................. GPT2BPETokenizer 0: train_iters ..................................... None 0: train_samples ................................... 29492188 0: train_tokens .................................... None 0: train_weighted_split_names ...................... ['train'] 0: train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document']] 0: train_weighted_split_paths_path ................. None 0: train_weighted_split_splits ..................... [['0:1']] 0: train_weighted_split_weights .................... [['1.0']] 0: universal_checkpoint ............................ False 0: use_bnb_optimizer ............................... False 0: use_checkpoint_lr_scheduler ..................... False 0: use_contiguous_buffers_in_ddp ................... True 0: use_cpu_initialization .......................... None 0: use_one_sent_docs ............................... False 0: use_pin_memory .................................. False 0: valid_num_workers ............................... 2 0: valid_weighted_split_names ...................... ['validation'] 0: valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']] 0: valid_weighted_split_paths_path ................. None 0: valid_weighted_split_splits ..................... [['0:1']] 0: valid_weighted_split_weights .................... [['1.0']] 0: virtual_pipeline_model_parallel_size ............ None 0: vocab_extra_ids ................................. 0 0: vocab_file ...................................... gpt2/vocab.json 0: weight_decay .................................... 0.1 0: world_size ...................................... 64 0: zero_allgather_bucket_size ...................... 0.0 0: zero_contigious_gradients ....................... False 0: zero_reduce_bucket_size ......................... 0.0 0: zero_reduce_scatter ............................. False 0: zero_stage ...................................... 0 0: -------------------- end of arguments --------------------- 0: setting number of micro-batches to constant 1 0: > building GPT2BPETokenizer tokenizer ... 
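Before the tokenizer and data pipeline come up, the batch and schedule arguments above already fix the size of the run. The following back-of-the-envelope check (a sketch added for reference, not part of the original log) reproduces the iteration count that Megatron prints further down ("setting training iterations to 115203") and the roughly 60B training tokens implied by --train-samples and --seq-length:

    # Sanity check of the run size implied by the arguments above (illustrative only).
    samples, seq_len, global_batch = 29_492_188, 2048, 256
    micro_batch, grad_accum, dp_size = 4, 1, 64          # "setting number of micro-batches to constant 1"
    assert micro_batch * grad_accum * dp_size == global_batch

    print(samples * seq_len)        # 60_400_001_024 tokens, presumably the "60b" in the run name 146m60b100mdedup
    print(samples // global_batch)  # 115_203 optimizer steps, matching the log line below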
0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304) 0: DeepSpeed general environment info: 0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch'] 0: torch version .................... 1.13.0+rocm5.2 0: torch cuda version ............... None 0: torch hip version ................ 5.2.21151-afdc89f8 0: nvcc version ..................... None 0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed'] 0: deepspeed info ................... 0.7.5, unknown, unknown 0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1 0: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** 0: > initializing torch distributed ... 0: [2023-03-17 10:54:15,674] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl 7: > setting tensorboard ... 0: > initializing tensor model parallel with size 1 0: > initializing pipeline model parallel with size 1 0: > setting random seeds to 1234 ... 0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 0: > compiling dataset index builder ... 0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: make: Nothing to be done for 'default'. 0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: >>> done with dataset index builder. Compilation time: 0.088 seconds 0: > compiling and loading fused kernels ... 
0: hipify of the fused-kernel sources under Megatron-DeepSpeed/megatron/fused_kernels/ (scaled_upper_triang_masked_softmax, scaled_masked_softmax, layer_norm_cuda, plus the shared compat.h and type_shim.h headers): all files skipped, already hipified or unchanged
0: Total number of unsupported CUDA function calls: 0
0: Total number of replaced kernel launches: 87, 63 and 67 for the three extensions
0: ninja: no work to do.
0: >>> done with compiling and loading fused kernels. Compilation time: 32.306 seconds
0: time to initialize megatron (seconds): 90.883
0: [after megatron is initialized] datetime: 2023-03-17 10:54:50
0: building GPT model ...
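The engine later reports TOTAL_PARAMS=146525952 (146.526M) for this model. That figure can be reproduced from the architecture arguments alone; the sketch below (added for reference, assuming tied input/output embeddings, biases on every linear layer, and two LayerNorms per block plus a final one, as in Megatron's GPT) arrives at exactly the same number:

    # Illustrative parameter-count check for the 146M configuration (not from the training code).
    h, n_layers, ffn, vocab_padded, max_pos = 768, 15, 3072, 50304, 2048

    embeddings = vocab_padded * h + max_pos * h      # word + position embeddings
    per_layer = (
        2 * 2 * h            # two LayerNorms (weight and bias each)
        + h * 3 * h + 3 * h  # fused QKV projection
        + h * h + h          # attention output projection
        + h * ffn + ffn      # MLP h -> 4h
        + ffn * h + h        # MLP 4h -> h
    )
    total = embeddings + n_layers * per_layer + 2 * h  # plus the final LayerNorm
    print(total)  # 146525952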
0: [2023-03-17 10:54:51,029] [INFO] [utils.py:827:see_memory_usage] Before Building Model
0: [2023-03-17 10:54:51,030] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB
0: [2023-03-17 10:54:51,030] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.42 GB, percent = 6.2%
0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
0: Using topology: {ProcessCoord(pipe=0, data=d, model=0): d for d = 0 ... 63} (a single pipeline stage and tensor-parallel group, 64-way data parallelism)
0: [2023-03-17 10:54:53,032] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer
0: stage=0 layers=22
0:      0: _to_float16
0:      1: EmbeddingPipe
0:      2:
0:   3-17: ParallelTransformerLayerPipe (15 transformer blocks)
0:     18: undo
0:     19: MixedFusedLayerNorm
0:     20: EmbeddingPipe
0:     21: float16_to_fp32
0:   loss: CrossEntropy
0: [2023-03-17 10:54:53,218] [INFO] [utils.py:827:see_memory_usage] After Building Model
0: [2023-03-17 10:54:53,219] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.28 GB CA 0.29 GB Max_CA 0 GB
0: [2023-03-17 10:54:53,219] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.44 GB, percent = 6.2%
0: setting training iterations to 115203
0: > learning rate decay style: cosine
0: DeepSpeed is enabled.
0: [2023-03-17 10:54:53,221] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown
0: [2023-03-17 10:55:06,090] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
0: [2023-03-17 10:55:06,091] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer
0: [2023-03-17 10:55:06,091] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer
0: [2023-03-17 10:55:06,095] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
0: [2023-03-17 10:55:06,095] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer
0: [2023-03-17 10:55:06,218] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer
0: [2023-03-17 10:55:06,218] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.29 GB CA 0.31 GB Max_CA 0 GB
0: [2023-03-17 10:55:06,218] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.11 GB, percent = 6.4%
0: ninja: no work to do.
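The scheduler that DeepSpeed wires up below follows the cosine decay requested above: linear warmup over 294,922 samples (about 1,152 steps at batch size 256) from 0 up to 2e-4, then cosine decay to the 2e-5 floor over 29,492,188 samples. A minimal sketch of that shape (the exact warmup-offset handling in Megatron-DeepSpeed may differ slightly):

    # Illustrative cosine-with-warmup schedule implied by --lr/--min-lr/--lr-warmup-samples/--lr-decay-samples.
    import math

    LR_MAX, LR_MIN = 2e-4, 2e-5
    WARMUP_SAMPLES, DECAY_SAMPLES = 294_922, 29_492_188

    def lr_at(consumed_samples: int) -> float:
        if consumed_samples < WARMUP_SAMPLES:
            return LR_MAX * consumed_samples / WARMUP_SAMPLES   # consistent with lr=[0.0, ...] at step 0 below
        progress = min(consumed_samples, DECAY_SAMPLES) / DECAY_SAMPLES
        return LR_MIN + 0.5 * (LR_MAX - LR_MIN) * (1.0 + math.cos(math.pi * progress))

    print(lr_at(0), lr_at(WARMUP_SAMPLES), lr_at(DECAY_SAMPLES))  # 0.0, ~2e-4, 2e-05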
Time to load utils op: 0.10-0.21 seconds for the first load on each rank of nodes 0-7, and under 0.0015 seconds for the subsequent cached loads (64 ranks in total).
0: [2023-03-17 10:55:06,444] [INFO] [utils.py:827:see_memory_usage] before initializing group 0 0: [2023-03-17 10:55:06,444] [INFO] 
[utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.28 GB CA 0.31 GB Max_CA 0 GB 0: [2023-03-17 10:55:06,445] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:06,558] [INFO] [utils.py:827:see_memory_usage] after initializing group 0 0: [2023-03-17 10:55:06,558] [INFO] [utils.py:828:see_memory_usage] MA 0.62 GB Max_MA 0.62 GB CA 0.82 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,559] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:06,660] [INFO] [utils.py:827:see_memory_usage] before initializing group 1 0: [2023-03-17 10:55:06,661] [INFO] [utils.py:828:see_memory_usage] MA 0.62 GB Max_MA 0.62 GB CA 0.82 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,661] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.27 GB, percent = 6.4% 0: [2023-03-17 10:55:06,764] [INFO] [utils.py:827:see_memory_usage] after initializing group 1 0: [2023-03-17 10:55:06,764] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,765] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:06,865] [INFO] [utils.py:827:see_memory_usage] before initializing group 2 0: [2023-03-17 10:55:06,865] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,866] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:06,969] [INFO] [utils.py:827:see_memory_usage] after initializing group 2 0: [2023-03-17 10:55:06,969] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,970] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:07,073] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer 0: [2023-03-17 10:55:07,074] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:07,074] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:07,182] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer 0: [2023-03-17 10:55:07,183] [INFO] [utils.py:828:see_memory_usage] MA 0.85 GB Max_MA 0.85 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:07,183] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:07,285] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer 0: [2023-03-17 10:55:07,285] [INFO] [utils.py:828:see_memory_usage] MA 0.85 GB Max_MA 0.85 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:07,285] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 32.26 GB, percent = 6.4% 0: [2023-03-17 10:55:07,286] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam 0: [2023-03-17 10:55:07,286] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler 0: [2023-03-17 10:55:07,286] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = 0: [2023-03-17 10:55:07,286] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 0: [2023-03-17 10:55:07,286] [INFO] [config.py:1007:print] DeepSpeedEngine configuration: 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] activation_checkpointing_config { 0: "partition_activations": false, 0: 
"contiguous_memory_optimization": false, 0: "cpu_checkpointing": false, 0: "number_checkpoints": null, 0: "synchronize_checkpoint_boundary": false, 0: "profile": false 0: } 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] amp_enabled .................. False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] amp_params ................... False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] autotuning_config ............ { 0: "enabled": false, 0: "start_step": null, 0: "end_step": null, 0: "metric_path": null, 0: "arg_mappings": null, 0: "metric": "throughput", 0: "model_info": null, 0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results", 0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps", 0: "overwrite": true, 0: "fast": true, 0: "start_profile_step": 3, 0: "end_profile_step": 5, 0: "tuner_type": "gridsearch", 0: "tuner_early_stopping": 5, 0: "tuner_num_trials": 50, 0: "model_info_path": null, 0: "mp_size": 1, 0: "max_train_batch_size": null, 0: "min_train_batch_size": 1, 0: "max_train_micro_batch_size_per_gpu": 1.024000e+03, 0: "min_train_micro_batch_size_per_gpu": 1, 0: "num_tuning_micro_batch_sizes": 3 0: } 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] bfloat16_enabled ............. True 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] comms_config ................. 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] communication_data_type ...... None 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_pa 0: rameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] curriculum_enabled ........... False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] curriculum_params ............ False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] dataloader_drop_last ......... 
False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] disable_allgather ............ False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] dump_state ................... False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] elasticity_enabled ........... False 0: [2023-03-17 10:55:07,287] [INFO] [config.py:1011:print] flops_profiler_config ........ { 0: "enabled": false, 0: "profile_step": 1, 0: "module_depth": -1, 0: "top_modules": 1, 0: "detailed": true, 0: "output_file": null 0: } 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] fp16_auto_cast ............... None 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] fp16_enabled ................. False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] global_rank .................. 0 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] load_universal_checkpoint .... False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] loss_scale ................... 1.0 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] memory_breakdown ............. False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] monitor_config ............... 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] nebula_config ................ { 0: "enabled": false, 0: "persistent_storage_path": null, 0: "persistent_time_interval": 100, 0: "num_of_version_in_retention": 2, 0: "enable_nebula_load": true, 0: "load_path": null 0: } 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] optimizer_name ............... None 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] optimizer_params ............. None 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0} 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] pld_enabled .................. False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] pld_params ................... 
False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] prescale_gradients ........... False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] scheduler_name ............... None 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] scheduler_params ............. None 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] sparse_attention ............. None 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] sparse_gradients_enabled ..... False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] steps_per_print .............. 2000 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] train_batch_size ............. 256 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] train_micro_batch_size_per_gpu 4 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] use_node_local_storage ....... False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] wall_clock_breakdown ......... False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] world_size ................... 64 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] zero_allow_untested_optimizer False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] zero_enabled ................. False 0: [2023-03-17 10:55:07,288] [INFO] [config.py:1011:print] zero_optimization_stage ...... 0 0: [2023-03-17 10:55:07,288] [INFO] [config.py:996:print_user_config] json = { 0: "train_micro_batch_size_per_gpu": 4, 0: "train_batch_size": 256, 0: "gradient_clipping": 1.0, 0: "zero_optimization": { 0: "stage": 0 0: }, 0: "bf16": { 0: "enabled": true 0: }, 0: "steps_per_print": 2.000000e+03, 0: "wall_clock_breakdown": false 0: } 0: Time to load utils op: 0.0004229545593261719 seconds 0: [2023-03-17 10:55:07,289] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=4 0: [2023-03-17 10:55:07,342] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=22 [0, 22) STAGE_PARAMS=146525952 (146.526M) TOTAL_PARAMS=146525952 (146.526M) UNIQUE_PARAMS=146525952 (146.526M) 0: [2023-03-17 10:55:07,350] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m60b100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: WARNING: could not find the metadata file checkpoints_146m60b100mdedup 0: will not load any checkpoints and will start from random 4: [2023-03-17 10:55:07,350] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m60b100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 
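For reference, the file passed as --deepspeed_config (ds_configs/3328801.json) presumably contains exactly the user config echoed by config.py above; a plausible reconstruction (the real file may differ in formatting or key order) is:

    # Hypothetical reconstruction of ds_configs/3328801.json from the json dump printed above.
    ds_config = {
        "train_micro_batch_size_per_gpu": 4,
        "train_batch_size": 256,
        "gradient_clipping": 1.0,
        "zero_optimization": {"stage": 0},
        "bf16": {"enabled": True},
        "steps_per_print": 2000,
        "wall_clock_breakdown": False,
    }

With "stage": 0 and bf16 enabled, this matches the zero_enabled=False and zero_optimization_stage=0 lines above and the fp32 gradient accumulation noted at startup.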
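The "Unable to find latest file" warning above (emitted by every rank) is DeepSpeed's checkpoint-resolution step: with --load pointing at checkpoints_146m60b100mdedup, the engine looks for a small tag file named latest whose contents name the newest checkpoint directory (e.g. global_step10000); the directory is empty on this first run, so training starts from random initialization, as rank 0 notes. A minimal sketch of that lookup, under the assumption that the tag file simply contains the tag name:

```python
import os

def resolve_latest_tag(load_dir: str):
    """Sketch of the 'latest' lookup the warning refers to (assumed behaviour,
    not lifted from the DeepSpeed source): <load_dir>/latest holds the tag of
    the newest checkpoint, e.g. 'global_step10000'."""
    latest_path = os.path.join(load_dir, "latest")
    if not os.path.isfile(latest_path):
        return None                      # nothing to resume from -> start from random weights
    with open(latest_path) as f:
        return f.read().strip()

tag = resolve_latest_tag("checkpoints_146m60b100mdedup")
print(tag)   # None here; 'global_step10000' once the first save (further down) has completed
```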
7: time (ms) | load-checkpoint: 7.29 0: estimated model parameters: 0.146525952 0: estimated model parameters without embeddings: 0.106319616 0: [after model, optimizer, and learning rate scheduler are built] datetime: 2023-03-17 10:55:08 0: > building train, validation, and test datasets ... 0: > datasets target sizes (minimum size): 0: train: 29492188 0: validation: 3072 0: test: 256 0: > building train, validation, and test datasets for GPT ... 0: > building dataset index ... 0: reading sizes... 0: reading pointers... 0: reading document index... 0: creating numpy buffer of mmap... 0: creating memory view of numpy buffer... 
0: > finished creating indexed dataset in 0.007314 seconds 0: number of documents: 409500 0: > dataset split: 0: train: 0: document indices in [0, 409500) total of 409500 documents 0: > WARNING: could not find index map files, building the indices on rank 0 ... 0: > last epoch number of samples (40370) is smaller than 95.0% of number of samples per epoch (48281), setting separate_last_epoch to True 0: > elapsed time to build and save doc-idx mapping (seconds): 18.357756 0: using: 0: number of documents: 409500 0: number of epochs: 611 0: sequence length: 2048 0: total number of samples: 29500100 0: > elapsed time to build and save sample-idx mapping (seconds): 1.101130 0: > building shuffle index with split [0, 29451818) and [29451818, 29500100) ... 0: > elapsed time to build and save shuffle-idx mapping (seconds): 1.271609 0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_29492188ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_29492188ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_29492188ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.037 seconds 0: total number of samples: 29500101 0: total number of epochs: 611 0: > building dataset index ... 0: reading sizes... 0: reading pointers... 0: reading document index... 0: creating numpy buffer of mmap... 0: creating memory view of numpy buffer... 0: > finished creating indexed dataset in 0.034014 seconds 0: number of documents: 364608 0: > dataset split: 0: validation: 0: document indices in [0, 364608) total of 364608 documents 0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_3072ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_3072ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_3072ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.224 seconds 0: total number of samples: 84978 0: total number of epochs: 1 0: > finished creating GPT datasets ... 0: [after dataloaders are built] datetime: 2023-03-17 10:55:43 0: done with setup ... 0: training ... 
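A worked check (editorial sketch) of the index-building numbers just above: the train split asks for 29,492,188 sequences of 2,048 tokens, one pass over the 409,500-document corpus yields roughly 48,281 sequences, so about 611 epochs over the deduplicated subset are required, for a total of roughly 60B training tokens (the "60b" and "100mdedup" in the run name).

```python
import math

# Numbers taken from the dataset log lines above.
target_train_samples = 29_492_188   # --train-samples / "train: 29492188"
seq_length = 2048                   # --seq-length
samples_per_epoch = 48_281          # "number of samples per epoch (48281)"

epochs_needed = math.ceil(target_train_samples / samples_per_epoch)
print(epochs_needed)                # 611, matching "number of epochs: 611"

total_train_tokens = target_train_samples * seq_length
print(f"{total_train_tokens / 1e9:.1f}B tokens")   # ~60.4B training tokens
```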
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings: 7: time (ms) | model-and-optimizer-setup: 17548.75 | train/valid/test-data-iterators-setup: 34217.99 0: [000-000] 0.1465B / 0.1063B 0: [before the start of training step] datetime: 2023-03-17 10:55:43 0: [2023-03-17 10:55:44,879] [INFO] [checkpointing.py:553:forward] Activation Checkpointing Information 0: [2023-03-17 10:55:44,879] [INFO] [checkpointing.py:554:forward] ----Partition Activations False, CPU CHECKPOINTING False 0: [2023-03-17 10:55:44,879] [INFO] [checkpointing.py:557:forward] ----contiguous Memory Checkpointing False with None total layers 0: [2023-03-17 10:55:44,879] [INFO] [checkpointing.py:560:forward] ----Synchronization False 0: [2023-03-17 10:55:44,879] [INFO] [checkpointing.py:561:forward] ----Profiling time in checkpointing False 0: [Rank 0] (after 100 iterations) memory (MB) | allocated: 2728.54736328125 | max allocated: 5305.046875 | reserved: 6818.0 | max reserved: 6818.0 7: iteration 100/ 115203 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (s): 0.50 | learning rate: 1.736E-05 | global batch size: 256 | lm loss: 9.446750E+00 | grad norm: 1.451 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 512.444 | TFLOPs: 23.92 | 7: iteration 200/ 115203 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 0.54 | learning rate: 3.472E-05 | global batch size: 256 | lm loss: 7.552830E+00 | grad norm: 1.046 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 474.875 | TFLOPs: 22.17 | 7: iteration 300/ 115203 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 0.38 | learning rate: 5.208E-05 | global batch size: 256 | lm loss: 6.662783E+00 | grad norm: 0.549 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 671.158 | TFLOPs: 31.33 | 7: iteration 400/ 115203 | consumed samples: 102400 | consumed tokens: 209715200 | elapsed time per iteration (s): 0.38 | learning rate: 6.944E-05 | global batch size: 256 | lm loss: 6.340851E+00 | grad norm: 0.798 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.633 | TFLOPs: 31.82 | 7: iteration 500/ 115203 | consumed samples: 128000 | consumed tokens: 262144000 | elapsed time per iteration (s): 0.38 | learning rate: 8.680E-05 | global batch size: 256 | lm loss: 6.160777E+00 | grad norm: 1.057 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.201 | TFLOPs: 31.61 | 7: iteration 600/ 115203 | consumed samples: 153600 | consumed tokens: 314572800 | elapsed time per iteration (s): 0.38 | learning rate: 1.042E-04 | global batch size: 256 | lm loss: 6.011188E+00 | grad norm: 0.692 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.442 | TFLOPs: 31.29 | 7: iteration 700/ 115203 | consumed samples: 179200 | consumed tokens: 367001600 | elapsed time per iteration (s): 0.38 | learning rate: 1.215E-04 | global batch size: 256 | lm loss: 5.835112E+00 | grad norm: 1.059 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.723 | TFLOPs: 31.31 | 7: iteration 800/ 115203 | consumed samples: 204800 | consumed tokens: 419430400 | elapsed time per iteration (s): 0.38 | learning rate: 1.389E-04 | global batch 
size: 256 | lm loss: 5.668885E+00 | grad norm: 0.779 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.996 | TFLOPs: 31.41 | 7: iteration 900/ 115203 | consumed samples: 230400 | consumed tokens: 471859200 | elapsed time per iteration (s): 0.38 | learning rate: 1.562E-04 | global batch size: 256 | lm loss: 5.494742E+00 | grad norm: 0.641 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.975 | TFLOPs: 31.23 | 7: iteration 1000/ 115203 | consumed samples: 256000 | consumed tokens: 524288000 | elapsed time per iteration (s): 0.38 | learning rate: 1.736E-04 | global batch size: 256 | lm loss: 5.328548E+00 | grad norm: 0.539 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 671.624 | TFLOPs: 31.35 | 7: iteration 1100/ 115203 | consumed samples: 281600 | consumed tokens: 576716800 | elapsed time per iteration (s): 0.38 | learning rate: 1.910E-04 | global batch size: 256 | lm loss: 5.187463E+00 | grad norm: 0.552 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.551 | TFLOPs: 31.16 | 7: iteration 1200/ 115203 | consumed samples: 307200 | consumed tokens: 629145600 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 5.061819E+00 | grad norm: 0.672 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.555 | TFLOPs: 31.30 | 7: iteration 1300/ 115203 | consumed samples: 332800 | consumed tokens: 681574400 | elapsed time per iteration (s): 0.39 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.935373E+00 | grad norm: 0.510 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 656.048 | TFLOPs: 30.62 | 7: iteration 1400/ 115203 | consumed samples: 358400 | consumed tokens: 734003200 | elapsed time per iteration (s): 0.41 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.831989E+00 | grad norm: 0.566 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 630.783 | TFLOPs: 29.44 | 7: iteration 1500/ 115203 | consumed samples: 384000 | consumed tokens: 786432000 | elapsed time per iteration (s): 0.40 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.744005E+00 | grad norm: 0.578 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 640.482 | TFLOPs: 29.90 | 7: iteration 1600/ 115203 | consumed samples: 409600 | consumed tokens: 838860800 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.673722E+00 | grad norm: 0.652 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.930 | TFLOPs: 31.18 | 7: iteration 1700/ 115203 | consumed samples: 435200 | consumed tokens: 891289600 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.621135E+00 | grad norm: 0.446 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 669.158 | TFLOPs: 31.23 | 7: iteration 1800/ 115203 | consumed samples: 460800 | consumed tokens: 943718400 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.571743E+00 | grad norm: 0.458 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 668.082 | TFLOPs: 31.18 | 7: iteration 1900/ 115203 | consumed samples: 486400 | consumed tokens: 996147200 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.531851E+00 | grad norm: 0.469 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.386 | TFLOPs: 31.29 | 0: [2023-03-17 11:08:59,392] [INFO] [logging.py:68:log_dist] [Rank 0] step=2000, skipped=0, lr=[0.0001999754506631688, 0.0001999754506631688, 0.0001999754506631688], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 2000/ 115203 | consumed samples: 512000 | consumed tokens: 1048576000 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.493342E+00 | grad norm: 0.507 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.083 | TFLOPs: 31.28 | 0: steps: 2000 loss: 4.4862 iter time (s): 0.396 samples/sec: 646.839 7: iteration 2100/ 115203 | consumed samples: 537600 | consumed tokens: 1101004800 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.459721E+00 | grad norm: 0.470 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.621 | TFLOPs: 31.30 | 7: iteration 2200/ 115203 | consumed samples: 563200 | consumed tokens: 1153433600 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.425331E+00 | grad norm: 0.483 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 669.480 | TFLOPs: 31.25 | 7: iteration 2300/ 115203 | consumed samples: 588800 | consumed tokens: 1205862400 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.397895E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 669.980 | TFLOPs: 31.27 | 7: iteration 2400/ 115203 | consumed samples: 614400 | consumed tokens: 1258291200 | elapsed time per iteration (s): 0.39 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.370505E+00 | grad norm: 0.404 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 651.734 | TFLOPs: 30.42 | 7: iteration 2500/ 115203 | consumed samples: 640000 | consumed tokens: 1310720000 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.350138E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.724 | TFLOPs: 31.17 | 7: iteration 2600/ 115203 | consumed samples: 665600 | consumed tokens: 1363148800 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.326094E+00 | grad norm: 0.475 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.418 | TFLOPs: 31.20 | 7: iteration 2700/ 115203 | consumed samples: 691200 | consumed tokens: 1415577600 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.300583E+00 | grad norm: 0.343 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 669.842 | TFLOPs: 31.27 | 7: iteration 2800/ 115203 | consumed samples: 716800 | consumed tokens: 1468006400 
| elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.282165E+00 | grad norm: 0.508 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 669.740 | TFLOPs: 31.26 | 7: iteration 2900/ 115203 | consumed samples: 742400 | consumed tokens: 1520435200 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.261350E+00 | grad norm: 0.369 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 669.649 | TFLOPs: 31.26 | 7: iteration 3000/ 115203 | consumed samples: 768000 | consumed tokens: 1572864000 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.242763E+00 | grad norm: 0.344 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.429 | TFLOPs: 31.29 | 7: iteration 3100/ 115203 | consumed samples: 793600 | consumed tokens: 1625292800 | elapsed time per iteration (s): 0.39 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.222490E+00 | grad norm: 0.345 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 659.965 | TFLOPs: 30.80 | 7: iteration 3200/ 115203 | consumed samples: 819200 | consumed tokens: 1677721600 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 4.207057E+00 | grad norm: 0.363 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.811 | TFLOPs: 31.31 | 7: iteration 3300/ 115203 | consumed samples: 844800 | consumed tokens: 1730150400 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 4.189385E+00 | grad norm: 0.380 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.372 | TFLOPs: 31.29 | 7: iteration 3400/ 115203 | consumed samples: 870400 | consumed tokens: 1782579200 | elapsed time per iteration (s): 0.39 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 4.175847E+00 | grad norm: 0.424 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 656.215 | TFLOPs: 30.63 | 7: iteration 3500/ 115203 | consumed samples: 896000 | consumed tokens: 1835008000 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 4.163806E+00 | grad norm: 0.379 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.482 | TFLOPs: 31.20 | 7: iteration 3600/ 115203 | consumed samples: 921600 | consumed tokens: 1887436800 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 4.145450E+00 | grad norm: 0.337 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 669.606 | TFLOPs: 31.25 | 7: iteration 3700/ 115203 | consumed samples: 947200 | consumed tokens: 1939865600 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 4.131512E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.587 | TFLOPs: 31.21 | 7: iteration 3800/ 115203 | consumed samples: 972800 | consumed tokens: 1992294400 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | 
lm loss: 4.119253E+00 | grad norm: 0.342 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 671.991 | TFLOPs: 31.37 | 7: iteration 3900/ 115203 | consumed samples: 998400 | consumed tokens: 2044723200 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 4.108139E+00 | grad norm: 0.314 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.659 | TFLOPs: 31.16 | 0: [2023-03-17 11:21:46,315] [INFO] [logging.py:68:log_dist] [Rank 0] step=4000, skipped=0, lr=[0.00019972320825211248, 0.00019972320825211248, 0.00019972320825211248], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 4000/ 115203 | consumed samples: 1024000 | consumed tokens: 2097152000 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 4.092965E+00 | grad norm: 0.389 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 671.748 | TFLOPs: 31.35 | 0: steps: 4000 loss: 4.0609 iter time (s): 0.381 samples/sec: 671.154 7: iteration 4100/ 115203 | consumed samples: 1049600 | consumed tokens: 2149580800 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 4.085840E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.709 | TFLOPs: 31.31 | 7: iteration 4200/ 115203 | consumed samples: 1075200 | consumed tokens: 2202009600 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 4.072473E+00 | grad norm: 0.340 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.321 | TFLOPs: 31.38 | 7: iteration 4300/ 115203 | consumed samples: 1100800 | consumed tokens: 2254438400 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 4.059429E+00 | grad norm: 0.330 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.382 | TFLOPs: 31.48 | 7: iteration 4400/ 115203 | consumed samples: 1126400 | consumed tokens: 2306867200 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 4.051357E+00 | grad norm: 0.341 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.683 | TFLOPs: 31.31 | 7: iteration 4500/ 115203 | consumed samples: 1152000 | consumed tokens: 2359296000 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 4.039245E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 671.515 | TFLOPs: 31.34 | 7: iteration 4600/ 115203 | consumed samples: 1177600 | consumed tokens: 2411724800 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 4.029874E+00 | grad norm: 0.462 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.187 | TFLOPs: 31.42 | 7: iteration 4700/ 115203 | consumed samples: 1203200 | consumed tokens: 2464153600 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 4.021346E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 
671.095 | TFLOPs: 31.32 | 7: iteration 4800/ 115203 | consumed samples: 1228800 | consumed tokens: 2516582400 | elapsed time per iteration (s): 0.38 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 4.009039E+00 | grad norm: 0.345 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 671.872 | TFLOPs: 31.36 | 7: iteration 4900/ 115203 | consumed samples: 1254400 | consumed tokens: 2569011200 | elapsed time per iteration (s): 0.38 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.999212E+00 | grad norm: 0.304 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.263 | TFLOPs: 31.38 | 7: iteration 5000/ 115203 | consumed samples: 1280000 | consumed tokens: 2621440000 | elapsed time per iteration (s): 0.38 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.990288E+00 | grad norm: 0.425 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.439 | TFLOPs: 31.39 | 7: iteration 5100/ 115203 | consumed samples: 1305600 | consumed tokens: 2673868800 | elapsed time per iteration (s): 0.38 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.980733E+00 | grad norm: 0.353 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.527 | TFLOPs: 31.30 | 7: iteration 5200/ 115203 | consumed samples: 1331200 | consumed tokens: 2726297600 | elapsed time per iteration (s): 0.38 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.974338E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.784 | TFLOPs: 31.54 | 7: iteration 5300/ 115203 | consumed samples: 1356800 | consumed tokens: 2778726400 | elapsed time per iteration (s): 0.40 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.965930E+00 | grad norm: 0.331 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 642.622 | TFLOPs: 30.00 | 7: iteration 5400/ 115203 | consumed samples: 1382400 | consumed tokens: 2831155200 | elapsed time per iteration (s): 0.38 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.957897E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.184 | TFLOPs: 31.52 | 7: iteration 5500/ 115203 | consumed samples: 1408000 | consumed tokens: 2883584000 | elapsed time per iteration (s): 0.38 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.949813E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.849 | TFLOPs: 31.45 | 7: iteration 5600/ 115203 | consumed samples: 1433600 | consumed tokens: 2936012800 | elapsed time per iteration (s): 0.38 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.942898E+00 | grad norm: 0.302 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.217 | TFLOPs: 31.42 | 7: iteration 5700/ 115203 | consumed samples: 1459200 | consumed tokens: 2988441600 | elapsed time per iteration (s): 0.38 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.934567E+00 | grad norm: 0.322 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.239 | TFLOPs: 31.52 | 7: iteration 5800/ 115203 | consumed samples: 1484800 | 
consumed tokens: 3040870400 | elapsed time per iteration (s): 0.38 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.925726E+00 | grad norm: 0.354 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.766 | TFLOPs: 31.50 | 7: iteration 5900/ 115203 | consumed samples: 1510400 | consumed tokens: 3093299200 | elapsed time per iteration (s): 0.38 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.920110E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.769 | TFLOPs: 31.50 | 0: [2023-03-17 11:34:28,846] [INFO] [logging.py:68:log_dist] [Rank 0] step=6000, skipped=0, lr=[0.00019919872690019844, 0.00019919872690019844, 0.00019919872690019844], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 6000/ 115203 | consumed samples: 1536000 | consumed tokens: 3145728000 | elapsed time per iteration (s): 0.38 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.911740E+00 | grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.973 | TFLOPs: 31.46 | 0: steps: 6000 loss: 3.8959 iter time (s): 0.379 samples/sec: 675.049 7: iteration 6100/ 115203 | consumed samples: 1561600 | consumed tokens: 3198156800 | elapsed time per iteration (s): 0.38 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.905255E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.898 | TFLOPs: 31.55 | 7: iteration 6200/ 115203 | consumed samples: 1587200 | consumed tokens: 3250585600 | elapsed time per iteration (s): 0.38 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.900909E+00 | grad norm: 0.293 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.581 | TFLOPs: 31.44 | 7: iteration 6300/ 115203 | consumed samples: 1612800 | consumed tokens: 3303014400 | elapsed time per iteration (s): 0.38 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.892238E+00 | grad norm: 0.331 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.522 | TFLOPs: 31.44 | 7: iteration 6400/ 115203 | consumed samples: 1638400 | consumed tokens: 3355443200 | elapsed time per iteration (s): 0.38 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.883895E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.926 | TFLOPs: 31.46 | 7: iteration 6500/ 115203 | consumed samples: 1664000 | consumed tokens: 3407872000 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.880302E+00 | grad norm: 0.361 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.662 | TFLOPs: 31.49 | 7: iteration 6600/ 115203 | consumed samples: 1689600 | consumed tokens: 3460300800 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.874327E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.816 | TFLOPs: 31.54 | 7: iteration 6700/ 115203 | consumed samples: 1715200 | consumed tokens: 3512729600 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.870846E+00 | 
grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.107 | TFLOPs: 31.46 | 7: iteration 6800/ 115203 | consumed samples: 1740800 | consumed tokens: 3565158400 | elapsed time per iteration (s): 0.38 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.860056E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.715 | TFLOPs: 31.40 | 7: iteration 6900/ 115203 | consumed samples: 1766400 | consumed tokens: 3617587200 | elapsed time per iteration (s): 0.38 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.856631E+00 | grad norm: 0.338 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.420 | TFLOPs: 31.48 | 7: iteration 7000/ 115203 | consumed samples: 1792000 | consumed tokens: 3670016000 | elapsed time per iteration (s): 0.38 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.852606E+00 | grad norm: 0.331 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.026 | TFLOPs: 31.51 | 7: iteration 7100/ 115203 | consumed samples: 1817600 | consumed tokens: 3722444800 | elapsed time per iteration (s): 0.38 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.845426E+00 | grad norm: 0.326 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.103 | TFLOPs: 31.42 | 7: iteration 7200/ 115203 | consumed samples: 1843200 | consumed tokens: 3774873600 | elapsed time per iteration (s): 0.38 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.841217E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.025 | TFLOPs: 31.51 | 7: iteration 7300/ 115203 | consumed samples: 1868800 | consumed tokens: 3827302400 | elapsed time per iteration (s): 0.38 | learning rate: 1.987E-04 | global batch size: 256 | lm loss: 3.836478E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.488 | TFLOPs: 31.53 | 7: iteration 7400/ 115203 | consumed samples: 1894400 | consumed tokens: 3879731200 | elapsed time per iteration (s): 0.38 | learning rate: 1.987E-04 | global batch size: 256 | lm loss: 3.832570E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.384 | TFLOPs: 31.52 | 7: iteration 7500/ 115203 | consumed samples: 1920000 | consumed tokens: 3932160000 | elapsed time per iteration (s): 0.38 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 3.822859E+00 | grad norm: 0.306 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.495 | TFLOPs: 31.53 | 7: iteration 7600/ 115203 | consumed samples: 1945600 | consumed tokens: 3984588800 | elapsed time per iteration (s): 0.40 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 3.820311E+00 | grad norm: 0.322 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 647.148 | TFLOPs: 30.21 | 7: iteration 7700/ 115203 | consumed samples: 1971200 | consumed tokens: 4037017600 | elapsed time per iteration (s): 0.38 | learning rate: 1.985E-04 | global batch size: 256 | lm loss: 3.814128E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan 
iterations: 0 | samples per second: 667.362 | TFLOPs: 31.15 | 7: iteration 7800/ 115203 | consumed samples: 1996800 | consumed tokens: 4089446400 | elapsed time per iteration (s): 0.38 | learning rate: 1.985E-04 | global batch size: 256 | lm loss: 3.807684E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.567 | TFLOPs: 31.44 | 7: iteration 7900/ 115203 | consumed samples: 2022400 | consumed tokens: 4141875200 | elapsed time per iteration (s): 0.38 | learning rate: 1.984E-04 | global batch size: 256 | lm loss: 3.803421E+00 | grad norm: 0.365 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.566 | TFLOPs: 31.53 | 0: [2023-03-17 11:47:14,146] [INFO] [logging.py:68:log_dist] [Rank 0] step=8000, skipped=0, lr=[0.00019840359799331808, 0.00019840359799331808, 0.00019840359799331808], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 8000/ 115203 | consumed samples: 2048000 | consumed tokens: 4194304000 | elapsed time per iteration (s): 0.42 | learning rate: 1.984E-04 | global batch size: 256 | lm loss: 3.800918E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 606.557 | TFLOPs: 28.31 | 0: steps: 8000 loss: 3.8079 iter time (s): 0.381 samples/sec: 672.672 7: iteration 8100/ 115203 | consumed samples: 2073600 | consumed tokens: 4246732800 | elapsed time per iteration (s): 0.43 | learning rate: 1.984E-04 | global batch size: 256 | lm loss: 3.795524E+00 | grad norm: 0.348 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 595.599 | TFLOPs: 27.80 | 7: iteration 8200/ 115203 | consumed samples: 2099200 | consumed tokens: 4299161600 | elapsed time per iteration (s): 0.38 | learning rate: 1.983E-04 | global batch size: 256 | lm loss: 3.790910E+00 | grad norm: 0.336 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.532 | TFLOPs: 31.48 | 7: iteration 8300/ 115203 | consumed samples: 2124800 | consumed tokens: 4351590400 | elapsed time per iteration (s): 0.38 | learning rate: 1.983E-04 | global batch size: 256 | lm loss: 3.787971E+00 | grad norm: 0.366 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.415 | TFLOPs: 31.48 | 7: iteration 8400/ 115203 | consumed samples: 2150400 | consumed tokens: 4404019200 | elapsed time per iteration (s): 0.38 | learning rate: 1.982E-04 | global batch size: 256 | lm loss: 3.783500E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.819 | TFLOPs: 31.50 | 7: iteration 8500/ 115203 | consumed samples: 2176000 | consumed tokens: 4456448000 | elapsed time per iteration (s): 0.38 | learning rate: 1.982E-04 | global batch size: 256 | lm loss: 3.778068E+00 | grad norm: 0.377 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.619 | TFLOPs: 31.54 | 7: iteration 8600/ 115203 | consumed samples: 2201600 | consumed tokens: 4508876800 | elapsed time per iteration (s): 0.38 | learning rate: 1.981E-04 | global batch size: 256 | lm loss: 3.772711E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.080 | TFLOPs: 31.51 | 7: iteration 8700/ 115203 | consumed samples: 2227200 | consumed tokens: 4561305600 
| elapsed time per iteration (s): 0.38 | learning rate: 1.981E-04 | global batch size: 256 | lm loss: 3.771110E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.307 | TFLOPs: 31.47 | 7: iteration 8800/ 115203 | consumed samples: 2252800 | consumed tokens: 4613734400 | elapsed time per iteration (s): 0.41 | learning rate: 1.980E-04 | global batch size: 256 | lm loss: 3.766192E+00 | grad norm: 0.327 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 628.507 | TFLOPs: 29.34 | 7: iteration 8900/ 115203 | consumed samples: 2278400 | consumed tokens: 4666163200 | elapsed time per iteration (s): 0.38 | learning rate: 1.980E-04 | global batch size: 256 | lm loss: 3.760788E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.149 | TFLOPs: 31.51 | 7: iteration 9000/ 115203 | consumed samples: 2304000 | consumed tokens: 4718592000 | elapsed time per iteration (s): 0.38 | learning rate: 1.979E-04 | global batch size: 256 | lm loss: 3.755876E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.380 | TFLOPs: 31.52 | 7: iteration 9100/ 115203 | consumed samples: 2329600 | consumed tokens: 4771020800 | elapsed time per iteration (s): 0.38 | learning rate: 1.979E-04 | global batch size: 256 | lm loss: 3.755887E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.168 | TFLOPs: 31.51 | 7: iteration 9200/ 115203 | consumed samples: 2355200 | consumed tokens: 4823449600 | elapsed time per iteration (s): 0.38 | learning rate: 1.978E-04 | global batch size: 256 | lm loss: 3.749453E+00 | grad norm: 0.363 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.030 | TFLOPs: 31.51 | 7: iteration 9300/ 115203 | consumed samples: 2380800 | consumed tokens: 4875878400 | elapsed time per iteration (s): 0.38 | learning rate: 1.977E-04 | global batch size: 256 | lm loss: 3.745607E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.188 | TFLOPs: 31.52 | 7: iteration 9400/ 115203 | consumed samples: 2406400 | consumed tokens: 4928307200 | elapsed time per iteration (s): 0.39 | learning rate: 1.977E-04 | global batch size: 256 | lm loss: 3.742119E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 650.996 | TFLOPs: 30.39 | 7: iteration 9500/ 115203 | consumed samples: 2432000 | consumed tokens: 4980736000 | elapsed time per iteration (s): 0.38 | learning rate: 1.976E-04 | global batch size: 256 | lm loss: 3.739066E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.038 | TFLOPs: 31.51 | 7: iteration 9600/ 115203 | consumed samples: 2457600 | consumed tokens: 5033164800 | elapsed time per iteration (s): 0.38 | learning rate: 1.976E-04 | global batch size: 256 | lm loss: 3.735552E+00 | grad norm: 0.332 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 666.361 | TFLOPs: 31.10 | 7: iteration 9700/ 115203 | consumed samples: 2483200 | consumed tokens: 5085593600 | elapsed time per iteration (s): 0.38 | learning rate: 1.975E-04 | global batch 
size: 256 | lm loss: 3.733088E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.412 | TFLOPs: 31.39 | 7: iteration 9800/ 115203 | consumed samples: 2508800 | consumed tokens: 5138022400 | elapsed time per iteration (s): 0.38 | learning rate: 1.975E-04 | global batch size: 256 | lm loss: 3.728184E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.674 | TFLOPs: 31.49 | 7: iteration 9900/ 115203 | consumed samples: 2534400 | consumed tokens: 5190451200 | elapsed time per iteration (s): 0.38 | learning rate: 1.974E-04 | global batch size: 256 | lm loss: 3.724267E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.824 | TFLOPs: 31.17 | 0: [2023-03-17 12:00:04,676] [INFO] [logging.py:68:log_dist] [Rank 0] step=10000, skipped=0, lr=[0.00019734023411853413, 0.00019734023411853413, 0.00019734023411853413], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 10000/ 115203 | consumed samples: 2560000 | consumed tokens: 5242880000 | elapsed time per iteration (s): 0.40 | learning rate: 1.973E-04 | global batch size: 256 | lm loss: 3.723795E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 646.476 | TFLOPs: 30.18 | 0: steps: 10000 loss: 3.7143 iter time (s): 0.383 samples/sec: 668.023 7: ------------------------------------------------------------------------------------------------ 7: validation loss at iteration 10000 | lm loss value: 3.918742E+00 | lm loss PPL: 5.033708E+01 | 7: ------------------------------------------------------------------------------------------------ 0: saving checkpoint at iteration 10000 to checkpoints_146m60b100mdedup 0: [2023-03-17 12:00:04,884] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step10000 is begin to save! 0: [2023-03-17 12:00:05,564] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_01-model_00-model_states.pt... 0: [2023-03-17 12:00:05,670] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_01-model_00-model_states.pt. 0: [2023-03-17 12:00:05,670] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_03-model_00-model_states.pt... 0: [2023-03-17 12:00:05,686] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_03-model_00-model_states.pt. 0: [2023-03-17 12:00:05,687] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_04-model_00-model_states.pt... 0: [2023-03-17 12:00:05,703] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_04-model_00-model_states.pt. 0: [2023-03-17 12:00:05,703] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_05-model_00-model_states.pt... 0: [2023-03-17 12:00:05,718] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_05-model_00-model_states.pt. 0: [2023-03-17 12:00:05,719] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_06-model_00-model_states.pt... 
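The quantities logged around the step-10,000 milestone above can be reproduced from the command-line hyperparameters alone. The sketch below assumes a linear warmup followed by a plain cosine-on-samples decay to --min-lr, perplexity as exp of the LM loss, and the common Megatron-style FLOPs-per-step estimate with activation recomputation; these are editorial assumptions that happen to match the logged values, not code taken from this repository.

```python
import math

# Learning rate at step 10,000 (consumed samples = 10_000 * 256, past the warmup region).
max_lr, min_lr = 2e-4, 2e-5                            # --lr, --min-lr
warmup_samples, decay_samples = 294_922, 29_492_188    # --lr-warmup-samples, --lr-decay-samples
consumed = 10_000 * 256
progress = (consumed - warmup_samples) / (decay_samples - warmup_samples)
lr = min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
print(f"{lr:.6e}")                   # ~1.9734e-04, matching lr=[0.00019734023411853413, ...]

# Validation perplexity is exp of the reported LM loss.
val_loss = 3.918742
print(f"{math.exp(val_loss):.4f}")   # ~50.3371, matching "lm loss PPL: 5.033708E+01"

# Per-GPU throughput: Megatron-style FLOPs-per-step estimate,
# 96*B*s*l*h^2 * (1 + s/(6h) + V/(16*l*h)), where the factor 96 assumes full
# activation recomputation (--checkpoint-activations is on).
B, s, l, h, V = 256, 2048, 15, 768, 50_304     # V: padded vocabulary backed out earlier
flops_per_step = 96 * B * s * l * h**2 * (1 + s / (6 * h) + V / (16 * l * h))
samples_per_second = 671.0                     # a typical value from the iteration lines above
tflops_per_gpu = flops_per_step * (samples_per_second / B) / 64 / 1e12
print(f"{tflops_per_gpu:.1f}")                 # ~31.3, matching the logged "TFLOPs: 31.x"
```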
0: [2023-03-17 12:00:05,734] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_06-model_00-model_states.pt. 0: [2023-03-17 12:00:05,734] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_07-model_00-model_states.pt... 0: [2023-03-17 12:00:05,749] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_07-model_00-model_states.pt. 0: [2023-03-17 12:00:05,750] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_08-model_00-model_states.pt... 0: [2023-03-17 12:00:05,765] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_08-model_00-model_states.pt. 0: [2023-03-17 12:00:05,765] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_09-model_00-model_states.pt... 0: [2023-03-17 12:00:05,780] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_09-model_00-model_states.pt. 0: [2023-03-17 12:00:05,781] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_10-model_00-model_states.pt... 0: [2023-03-17 12:00:05,796] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_10-model_00-model_states.pt. 0: [2023-03-17 12:00:05,796] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_11-model_00-model_states.pt... 0: [2023-03-17 12:00:05,811] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_11-model_00-model_states.pt. 0: [2023-03-17 12:00:05,811] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_12-model_00-model_states.pt... 0: [2023-03-17 12:00:05,827] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_12-model_00-model_states.pt. 0: [2023-03-17 12:00:05,827] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_13-model_00-model_states.pt... 0: [2023-03-17 12:00:05,842] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_13-model_00-model_states.pt. 0: [2023-03-17 12:00:05,842] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_14-model_00-model_states.pt... 0: [2023-03-17 12:00:05,858] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_14-model_00-model_states.pt. 0: [2023-03-17 12:00:05,858] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_15-model_00-model_states.pt... 0: [2023-03-17 12:00:05,873] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_15-model_00-model_states.pt. 0: [2023-03-17 12:00:05,873] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_16-model_00-model_states.pt... 
0: [2023-03-17 12:00:05,889] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_16-model_00-model_states.pt. 0: [2023-03-17 12:00:05,889] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_17-model_00-model_states.pt... 0: [2023-03-17 12:00:05,904] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_17-model_00-model_states.pt. 0: [2023-03-17 12:00:05,904] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/layer_19-model_00-model_states.pt... 0: [2023-03-17 12:00:05,905] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/layer_19-model_00-model_states.pt. 0: [2023-03-17 12:00:05,906] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_146m60b100mdedup/global_step10000/mp_rank_00_model_states.pt 0: [2023-03-17 12:00:05,906] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/mp_rank_00_model_states.pt... 0: [2023-03-17 12:00:05,909] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/mp_rank_00_model_states.pt. 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 
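The filenames above show the layout written for every global_stepN directory in this run: one layer_XX-model_00-model_states.pt per pipeline layer (layer_01, then layer_03 through layer_17 for the 15 transformer blocks, then layer_19; reading layer_01 as the embedding and layer_19 as the final norm is an inference from the filenames, not taken from the source), a single mp_rank_00_model_states.pt, and one bf16_zero_pp_rank_N_mp_rank_00_optim_states.pt per data-parallel rank, of which 64 appear across the 8 logging nodes. A sketch that enumerates the expected files for one step under those assumptions:

# Hypothetical helper: list the checkpoint files observed in this log for one
# global_step directory. The layer-index mapping is inferred from the filenames
# above, not from the Megatron-DeepSpeed source.
def expected_files(step, n_transformer_layers=15, dp_ranks=64):
    files = [f"global_step{step}/layer_01-model_00-model_states.pt"]
    files += [f"global_step{step}/layer_{i:02d}-model_00-model_states.pt"
              for i in range(3, 3 + n_transformer_layers)]    # layer_03 .. layer_17
    files += [f"global_step{step}/layer_19-model_00-model_states.pt"]
    files += [f"global_step{step}/mp_rank_00_model_states.pt"]
    files += [f"global_step{step}/bf16_zero_pp_rank_{r}_mp_rank_00_optim_states.pt"
              for r in range(dp_ranks)]                        # one optimizer shard per DP rank
    return files

print(len(expected_files(10000)))   # 17 layer files + 1 model-states file + 64 shards = 82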
6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 
7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 7: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 
3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 3: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 6: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 5: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 4: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2023-03-17 12:00:05,928] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 0: [2023-03-17 12:00:05,964] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2023-03-17 12:00:05,964] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:05,964] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 0: [2023-03-17 12:00:05,965] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 
0: [2023-03-17 12:00:05,965] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:05,965] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 0: [2023-03-17 12:00:05,966] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2023-03-17 12:00:05,966] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:05,966] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 0: [2023-03-17 12:00:05,966] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2023-03-17 12:00:05,966] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:05,966] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 0: [2023-03-17 12:00:05,967] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2023-03-17 12:00:05,968] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:05,968] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 0: [2023-03-17 12:00:05,969] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2023-03-17 12:00:05,969] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:05,969] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 0: [2023-03-17 12:00:05,970] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 0: [2023-03-17 12:00:05,977] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2023-03-17 12:00:05,977] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:05,977] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 
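The learning rate logged for step 10000 earlier in this segment (1.973E-04; 0.00019734023411853413 in the Rank 0 line) is what a cosine schedule annealed in consumed samples produces, assuming a 2e-4 peak, 2e-5 floor, 294,922 warmup samples and 29,492,188 decay samples, consistent with the schedule flags in the launch command. A sketch of that schedule, assuming linear warmup followed by a half-cosine decay:

import math

def lr_at(consumed_samples, max_lr=2e-4, min_lr=2e-5,
          warmup=294_922, decay=29_492_188):
    # Linear warmup, then a half-cosine from max_lr down to min_lr over the
    # remaining (decay - warmup) samples; assumed form, not copied from source.
    if consumed_samples < warmup:
        return max_lr * consumed_samples / warmup
    ratio = min((consumed_samples - warmup) / (decay - warmup), 1.0)
    return min_lr + 0.5 * (1.0 + math.cos(math.pi * ratio)) * (max_lr - min_lr)

print(lr_at(2_560_000))   # ~1.9734e-04, matching the step-10000 log line
print(lr_at(3_072_000))   # ~1.9601e-04, matching the step-12000 log line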
6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 6: [2023-03-17 12:00:05,983] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 6: [2023-03-17 12:00:05,983] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 
2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 
2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 2: [2023-03-17 12:00:05,995] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt. 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt. 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 
5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 5: [2023-03-17 12:00:06,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 
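One bf16_zero_pp_rank_N file is written per data-parallel rank because the bf16 optimizer keeps its fp32 master weights and Adam moments partitioned across the ranks (hence the "zero" in the filename), so each rank saves only its own slice. A rough size estimate for those shards, assuming roughly 146M parameters (as the run name 146m60b100mdedup suggests) and a 4-byte fp32 master copy plus two 4-byte Adam moments per parameter:

params = 146e6                    # assumed parameter count, from the run name
state_bytes = params * 4 * 3      # fp32 master copy + exp_avg + exp_avg_sq
print(state_bytes / 1e9)          # ~1.75 GB of optimizer state in total
print(state_bytes / 64 / 2**20)   # ~26 MiB per bf16_zero_pp_rank_N shard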
7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 7: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 1: [2023-03-17 12:00:06,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 
1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 1: [2023-03-17 12:00:06,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 
1: [2023-03-17 12:00:06,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 0: [2023-03-17 12:00:06,005] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 0: [2023-03-17 12:00:06,005] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 
3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 3: [2023-03-17 12:00:06,006] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 
4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 4: [2023-03-17 12:00:06,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 4: [2023-03-17 12:00:06,018] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 4: [2023-03-17 12:00:06,018] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step10000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2023-03-17 12:00:06,018] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step10000 is ready now! 
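The rank 0 summary just below reports save-checkpoint: 1154.57 ms. At this cadence of one save per 10000 iterations, with iterations taking roughly 0.38 s each, checkpointing is a negligible share of wall time; a quick check:

save_s = 1154.57 / 1000               # "time (ms) | save-checkpoint" reported below
interval_s = 10_000 * 0.38            # iterations between saves at ~0.38 s each
print(f"{save_s / interval_s:.4%}")   # ~0.03% of training time spent saving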
0: successfully saved checkpoint at iteration 10000 to checkpoints_146m60b100mdedup 7: time (ms) | save-checkpoint: 1154.57 7: iteration 10100/ 115203 | consumed samples: 2585600 | consumed tokens: 5295308800 | elapsed time per iteration (s): 0.40 | learning rate: 1.973E-04 | global batch size: 256 | lm loss: 3.716859E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 644.670 | TFLOPs: 30.09 | 7: iteration 10200/ 115203 | consumed samples: 2611200 | consumed tokens: 5347737600 | elapsed time per iteration (s): 0.39 | learning rate: 1.972E-04 | global batch size: 256 | lm loss: 3.713563E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 664.641 | TFLOPs: 31.02 | 7: iteration 10300/ 115203 | consumed samples: 2636800 | consumed tokens: 5400166400 | elapsed time per iteration (s): 0.39 | learning rate: 1.972E-04 | global batch size: 256 | lm loss: 3.713238E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 664.643 | TFLOPs: 31.02 | 7: iteration 10400/ 115203 | consumed samples: 2662400 | consumed tokens: 5452595200 | elapsed time per iteration (s): 0.41 | learning rate: 1.971E-04 | global batch size: 256 | lm loss: 3.709554E+00 | grad norm: 0.297 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 622.377 | TFLOPs: 29.05 | 7: iteration 10500/ 115203 | consumed samples: 2688000 | consumed tokens: 5505024000 | elapsed time per iteration (s): 0.38 | learning rate: 1.970E-04 | global batch size: 256 | lm loss: 3.707072E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.954 | TFLOPs: 31.46 | 7: iteration 10600/ 115203 | consumed samples: 2713600 | consumed tokens: 5557452800 | elapsed time per iteration (s): 0.39 | learning rate: 1.970E-04 | global batch size: 256 | lm loss: 3.702128E+00 | grad norm: 0.336 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 651.765 | TFLOPs: 30.42 | 7: iteration 10700/ 115203 | consumed samples: 2739200 | consumed tokens: 5609881600 | elapsed time per iteration (s): 0.38 | learning rate: 1.969E-04 | global batch size: 256 | lm loss: 3.701193E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 676.021 | TFLOPs: 31.55 | 7: iteration 10800/ 115203 | consumed samples: 2764800 | consumed tokens: 5662310400 | elapsed time per iteration (s): 0.38 | learning rate: 1.968E-04 | global batch size: 256 | lm loss: 3.698191E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.151 | TFLOPs: 31.51 | 7: iteration 10900/ 115203 | consumed samples: 2790400 | consumed tokens: 5714739200 | elapsed time per iteration (s): 0.38 | learning rate: 1.968E-04 | global batch size: 256 | lm loss: 3.693488E+00 | grad norm: 0.327 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.397 | TFLOPs: 31.53 | 7: iteration 11000/ 115203 | consumed samples: 2816000 | consumed tokens: 5767168000 | elapsed time per iteration (s): 0.38 | learning rate: 1.967E-04 | global batch size: 256 | lm loss: 3.690057E+00 | grad norm: 0.323 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 
0 | samples per second: 674.380 | TFLOPs: 31.48 | 7: iteration 11100/ 115203 | consumed samples: 2841600 | consumed tokens: 5819596800 | elapsed time per iteration (s): 0.38 | learning rate: 1.966E-04 | global batch size: 256 | lm loss: 3.690394E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.916 | TFLOPs: 31.22 | 7: iteration 11200/ 115203 | consumed samples: 2867200 | consumed tokens: 5872025600 | elapsed time per iteration (s): 0.38 | learning rate: 1.966E-04 | global batch size: 256 | lm loss: 3.684731E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.229 | TFLOPs: 31.52 | 7: iteration 11300/ 115203 | consumed samples: 2892800 | consumed tokens: 5924454400 | elapsed time per iteration (s): 0.38 | learning rate: 1.965E-04 | global batch size: 256 | lm loss: 3.681752E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.916 | TFLOPs: 31.46 | 7: iteration 11400/ 115203 | consumed samples: 2918400 | consumed tokens: 5976883200 | elapsed time per iteration (s): 0.38 | learning rate: 1.964E-04 | global batch size: 256 | lm loss: 3.680923E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.562 | TFLOPs: 31.49 | 7: iteration 11500/ 115203 | consumed samples: 2944000 | consumed tokens: 6029312000 | elapsed time per iteration (s): 0.38 | learning rate: 1.964E-04 | global batch size: 256 | lm loss: 3.675646E+00 | grad norm: 0.316 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.455 | TFLOPs: 31.48 | 7: iteration 11600/ 115203 | consumed samples: 2969600 | consumed tokens: 6081740800 | elapsed time per iteration (s): 0.38 | learning rate: 1.963E-04 | global batch size: 256 | lm loss: 3.674940E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.276 | TFLOPs: 31.52 | 7: iteration 11700/ 115203 | consumed samples: 2995200 | consumed tokens: 6134169600 | elapsed time per iteration (s): 0.38 | learning rate: 1.962E-04 | global batch size: 256 | lm loss: 3.671588E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.872 | TFLOPs: 31.17 | 7: iteration 11800/ 115203 | consumed samples: 3020800 | consumed tokens: 6186598400 | elapsed time per iteration (s): 0.38 | learning rate: 1.962E-04 | global batch size: 256 | lm loss: 3.667109E+00 | grad norm: 0.311 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.902 | TFLOPs: 31.50 | 7: iteration 11900/ 115203 | consumed samples: 3046400 | consumed tokens: 6239027200 | elapsed time per iteration (s): 0.38 | learning rate: 1.961E-04 | global batch size: 256 | lm loss: 3.664449E+00 | grad norm: 0.317 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.242 | TFLOPs: 31.14 | 0: [2023-03-17 12:12:52,025] [INFO] [logging.py:68:log_dist] [Rank 0] step=12000, skipped=0, lr=[0.0001960118617437879, 0.0001960118617437879, 0.0001960118617437879], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 12000/ 115203 | consumed samples: 3072000 | consumed tokens: 6291456000 | elapsed time per iteration (s): 0.38 | learning rate: 1.960E-04 | 
global batch size: 256 | lm loss: 3.664099E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.755 | TFLOPs: 31.50 | 0: steps: 12000 loss: 3.6729 iter time (s): 0.381 samples/sec: 672.093 7: iteration 12100/ 115203 | consumed samples: 3097600 | consumed tokens: 6343884800 | elapsed time per iteration (s): 0.39 | learning rate: 1.959E-04 | global batch size: 256 | lm loss: 3.663014E+00 | grad norm: 0.293 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 661.608 | TFLOPs: 30.88 | 7: iteration 12200/ 115203 | consumed samples: 3123200 | consumed tokens: 6396313600 | elapsed time per iteration (s): 0.38 | learning rate: 1.959E-04 | global batch size: 256 | lm loss: 3.659792E+00 | grad norm: 0.293 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.184 | TFLOPs: 31.52 | 7: iteration 12300/ 115203 | consumed samples: 3148800 | consumed tokens: 6448742400 | elapsed time per iteration (s): 0.39 | learning rate: 1.958E-04 | global batch size: 256 | lm loss: 3.656906E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 663.459 | TFLOPs: 30.97 | 7: iteration 12400/ 115203 | consumed samples: 3174400 | consumed tokens: 6501171200 | elapsed time per iteration (s): 0.38 | learning rate: 1.957E-04 | global batch size: 256 | lm loss: 3.653582E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.382 | TFLOPs: 31.48 | 7: iteration 12500/ 115203 | consumed samples: 3200000 | consumed tokens: 6553600000 | elapsed time per iteration (s): 0.39 | learning rate: 1.956E-04 | global batch size: 256 | lm loss: 3.653144E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 663.746 | TFLOPs: 30.98 | 7: iteration 12600/ 115203 | consumed samples: 3225600 | consumed tokens: 6606028800 | elapsed time per iteration (s): 0.38 | learning rate: 1.956E-04 | global batch size: 256 | lm loss: 3.646703E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.953 | TFLOPs: 31.50 | 7: iteration 12700/ 115203 | consumed samples: 3251200 | consumed tokens: 6658457600 | elapsed time per iteration (s): 0.38 | learning rate: 1.955E-04 | global batch size: 256 | lm loss: 3.646646E+00 | grad norm: 0.342 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.989 | TFLOPs: 31.46 | 7: iteration 12800/ 115203 | consumed samples: 3276800 | consumed tokens: 6710886400 | elapsed time per iteration (s): 0.38 | learning rate: 1.954E-04 | global batch size: 256 | lm loss: 3.645308E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.064 | TFLOPs: 31.37 | 7: iteration 12900/ 115203 | consumed samples: 3302400 | consumed tokens: 6763315200 | elapsed time per iteration (s): 0.38 | learning rate: 1.953E-04 | global batch size: 256 | lm loss: 3.643418E+00 | grad norm: 0.301 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 675.066 | TFLOPs: 31.51 | 7: iteration 13000/ 115203 | consumed samples: 3328000 | consumed tokens: 6815744000 | elapsed time per iteration (s): 0.38 | learning rate: 1.952E-04 | 
global batch size: 256 | lm loss: 3.639825E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.826 | TFLOPs: 31.50 | 7: iteration 13100/ 115203 | consumed samples: 3353600 | consumed tokens: 6868172800 | elapsed time per iteration (s): 0.38 | learning rate: 1.952E-04 | global batch size: 256 | lm loss: 3.639529E+00 | grad norm: 0.302 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.397 | TFLOPs: 31.48 | 7: iteration 13200/ 115203 | consumed samples: 3379200 | consumed tokens: 6920601600 | elapsed time per iteration (s): 0.38 | learning rate: 1.951E-04 | global batch size: 256 | lm loss: 3.636766E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.905 | TFLOPs: 31.46 | 7: iteration 13300/ 115203 | consumed samples: 3404800 | consumed tokens: 6973030400 | elapsed time per iteration (s): 0.38 | learning rate: 1.950E-04 | global batch size: 256 | lm loss: 3.635894E+00 | grad norm: 0.293 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.721 | TFLOPs: 31.49 | 7: iteration 13400/ 115203 | consumed samples: 3430400 | consumed tokens: 7025459200 | elapsed time per iteration (s): 0.38 | learning rate: 1.949E-04 | global batch size: 256 | lm loss: 3.631686E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.685 | TFLOPs: 31.17 | 7: iteration 13500/ 115203 | consumed samples: 3456000 | consumed tokens: 7077888000 | elapsed time per iteration (s): 0.38 | learning rate: 1.948E-04 | global batch size: 256 | lm loss: 3.629592E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.080 | TFLOPs: 31.46 | 7: iteration 13600/ 115203 | consumed samples: 3481600 | consumed tokens: 7130316800 | elapsed time per iteration (s): 0.38 | learning rate: 1.948E-04 | global batch size: 256 | lm loss: 3.627485E+00 | grad norm: 0.363 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.478 | TFLOPs: 31.48 | 7: iteration 13700/ 115203 | consumed samples: 3507200 | consumed tokens: 7182745600 | elapsed time per iteration (s): 0.38 | learning rate: 1.947E-04 | global batch size: 256 | lm loss: 3.627159E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.833 | TFLOPs: 31.45 | 7: iteration 13800/ 115203 | consumed samples: 3532800 | consumed tokens: 7235174400 | elapsed time per iteration (s): 0.38 | learning rate: 1.946E-04 | global batch size: 256 | lm loss: 3.625024E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.549 | TFLOPs: 31.44 | 7: iteration 13900/ 115203 | consumed samples: 3558400 | consumed tokens: 7287603200 | elapsed time per iteration (s): 0.40 | learning rate: 1.945E-04 | global batch size: 256 | lm loss: 3.619801E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 632.912 | TFLOPs: 29.54 | 0: [2023-03-17 12:25:36,164] [INFO] [logging.py:68:log_dist] [Rank 0] step=14000, skipped=0, lr=[0.00019442251142812213, 0.00019442251142812213, 0.00019442251142812213], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: 
iteration 14000/ 115203 | consumed samples: 3584000 | consumed tokens: 7340032000 | elapsed time per iteration (s): 0.38 | learning rate: 1.944E-04 | global batch size: 256 | lm loss: 3.620380E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.675 | TFLOPs: 31.49 | 0: steps: 14000 loss: 3.6244 iter time (s): 0.380 samples/sec: 673.651 7: iteration 14100/ 115203 | consumed samples: 3609600 | consumed tokens: 7392460800 | elapsed time per iteration (s): 0.38 | learning rate: 1.943E-04 | global batch size: 256 | lm loss: 3.618795E+00 | grad norm: 0.306 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.146 | TFLOPs: 31.42 | 7: iteration 14200/ 115203 | consumed samples: 3635200 | consumed tokens: 7444889600 | elapsed time per iteration (s): 0.38 | learning rate: 1.942E-04 | global batch size: 256 | lm loss: 3.616685E+00 | grad norm: 0.301 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.376 | TFLOPs: 31.43 | 7: iteration 14300/ 115203 | consumed samples: 3660800 | consumed tokens: 7497318400 | elapsed time per iteration (s): 0.38 | learning rate: 1.942E-04 | global batch size: 256 | lm loss: 3.616653E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.968 | TFLOPs: 31.46 | 7: iteration 14400/ 115203 | consumed samples: 3686400 | consumed tokens: 7549747200 | elapsed time per iteration (s): 0.38 | learning rate: 1.941E-04 | global batch size: 256 | lm loss: 3.614366E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.469 | TFLOPs: 31.48 | 7: iteration 14500/ 115203 | consumed samples: 3712000 | consumed tokens: 7602176000 | elapsed time per iteration (s): 0.40 | learning rate: 1.940E-04 | global batch size: 256 | lm loss: 3.611660E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 644.798 | TFLOPs: 30.10 | 7: iteration 14600/ 115203 | consumed samples: 3737600 | consumed tokens: 7654604800 | elapsed time per iteration (s): 0.38 | learning rate: 1.939E-04 | global batch size: 256 | lm loss: 3.610718E+00 | grad norm: 0.302 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.336 | TFLOPs: 31.48 | 7: iteration 14700/ 115203 | consumed samples: 3763200 | consumed tokens: 7707033600 | elapsed time per iteration (s): 0.38 | learning rate: 1.938E-04 | global batch size: 256 | lm loss: 3.607474E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.061 | TFLOPs: 31.46 | 7: iteration 14800/ 115203 | consumed samples: 3788800 | consumed tokens: 7759462400 | elapsed time per iteration (s): 0.39 | learning rate: 1.937E-04 | global batch size: 256 | lm loss: 3.605043E+00 | grad norm: 0.248 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 661.351 | TFLOPs: 30.87 | 7: iteration 14900/ 115203 | consumed samples: 3814400 | consumed tokens: 7811891200 | elapsed time per iteration (s): 0.38 | learning rate: 1.936E-04 | global batch size: 256 | lm loss: 3.603040E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.610 | TFLOPs: 31.44 | 7: 
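The bookkeeping columns in these entries are easy to reproduce: consumed samples is iteration times global batch size, consumed tokens multiplies that by the sequence length, and samples per second is the batch size over the iteration time. The TFLOPs column is also consistent with the widely used Megatron-style FLOPs estimate with a 4x factor for activation checkpointing; the constants below, including the padded vocabulary size, are assumptions rather than values read out of this code version:

    # Check of the iteration-14000 entry above. All constants are assumptions
    # taken from the launch command / model description.
    GLOBAL_BATCH, SEQ_LEN = 256, 2048
    N_LAYERS, HIDDEN = 15, 768
    VOCAB = 50_304                      # GPT-2 vocab (50257) padded -- an assumption
    WORLD_SIZE = 64                     # one data-parallel rank per GPU; ranks 0..63 appear in the checkpoint below

    it = 14_000
    samples = it * GLOBAL_BATCH          # 3,584,000      -> "consumed samples"
    tokens = samples * SEQ_LEN           # 7,340,032,000  -> "consumed tokens"
    iter_time = GLOBAL_BATCH / 674.675   # ~0.379 s, inverted from "samples per second"

    # Megatron-style per-iteration FLOPs estimate; factor 4 assumes activation checkpointing.
    flops = (24 * 4 * GLOBAL_BATCH * SEQ_LEN * N_LAYERS * HIDDEN ** 2
             * (1 + SEQ_LEN / (6 * HIDDEN) + VOCAB / (16 * N_LAYERS * HIDDEN)))
    print(samples, tokens)                          # matches the logged entry
    print(flops / (iter_time * WORLD_SIZE * 1e12))  # ~31.5 TFLOPs per GPU, as logged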
iteration 15000/ 115203 | consumed samples: 3840000 | consumed tokens: 7864320000 | elapsed time per iteration (s): 0.39 | learning rate: 1.935E-04 | global batch size: 256 | lm loss: 3.600201E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 659.409 | TFLOPs: 30.78 | 7: iteration 15100/ 115203 | consumed samples: 3865600 | consumed tokens: 7916748800 | elapsed time per iteration (s): 0.38 | learning rate: 1.934E-04 | global batch size: 256 | lm loss: 3.600391E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.361 | TFLOPs: 31.48 | 7: iteration 15200/ 115203 | consumed samples: 3891200 | consumed tokens: 7969177600 | elapsed time per iteration (s): 0.38 | learning rate: 1.933E-04 | global batch size: 256 | lm loss: 3.597588E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.334 | TFLOPs: 31.48 | 7: iteration 15300/ 115203 | consumed samples: 3916800 | consumed tokens: 8021606400 | elapsed time per iteration (s): 0.38 | learning rate: 1.933E-04 | global batch size: 256 | lm loss: 3.597310E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.335 | TFLOPs: 31.48 | 7: iteration 15400/ 115203 | consumed samples: 3942400 | consumed tokens: 8074035200 | elapsed time per iteration (s): 0.38 | learning rate: 1.932E-04 | global batch size: 256 | lm loss: 3.597124E+00 | grad norm: 0.334 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.942 | TFLOPs: 31.46 | 7: iteration 15500/ 115203 | consumed samples: 3968000 | consumed tokens: 8126464000 | elapsed time per iteration (s): 0.38 | learning rate: 1.931E-04 | global batch size: 256 | lm loss: 3.594730E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.661 | TFLOPs: 31.40 | 7: iteration 15600/ 115203 | consumed samples: 3993600 | consumed tokens: 8178892800 | elapsed time per iteration (s): 0.38 | learning rate: 1.930E-04 | global batch size: 256 | lm loss: 3.591522E+00 | grad norm: 0.367 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.567 | TFLOPs: 31.44 | 7: iteration 15700/ 115203 | consumed samples: 4019200 | consumed tokens: 8231321600 | elapsed time per iteration (s): 0.38 | learning rate: 1.929E-04 | global batch size: 256 | lm loss: 3.588038E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.511 | TFLOPs: 31.44 | 7: iteration 15800/ 115203 | consumed samples: 4044800 | consumed tokens: 8283750400 | elapsed time per iteration (s): 0.38 | learning rate: 1.928E-04 | global batch size: 256 | lm loss: 3.588437E+00 | grad norm: 0.319 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.225 | TFLOPs: 31.47 | 7: iteration 15900/ 115203 | consumed samples: 4070400 | consumed tokens: 8336179200 | elapsed time per iteration (s): 0.38 | learning rate: 1.927E-04 | global batch size: 256 | lm loss: 3.587399E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.145 | TFLOPs: 31.47 | 0: [2023-03-17 12:38:19,209] [INFO] [logging.py:68:log_dist] [Rank 0] 
step=16000, skipped=0, lr=[0.00019257700559212364, 0.00019257700559212364, 0.00019257700559212364], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 16000/ 115203 | consumed samples: 4096000 | consumed tokens: 8388608000 | elapsed time per iteration (s): 0.38 | learning rate: 1.926E-04 | global batch size: 256 | lm loss: 3.584383E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.961 | TFLOPs: 31.46 | 0: steps: 16000 loss: 3.5859 iter time (s): 0.379 samples/sec: 674.647 7: iteration 16100/ 115203 | consumed samples: 4121600 | consumed tokens: 8441036800 | elapsed time per iteration (s): 0.38 | learning rate: 1.925E-04 | global batch size: 256 | lm loss: 3.581913E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.516 | TFLOPs: 31.48 | 7: iteration 16200/ 115203 | consumed samples: 4147200 | consumed tokens: 8493465600 | elapsed time per iteration (s): 0.38 | learning rate: 1.924E-04 | global batch size: 256 | lm loss: 3.585553E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.063 | TFLOPs: 31.46 | 7: iteration 16300/ 115203 | consumed samples: 4172800 | consumed tokens: 8545894400 | elapsed time per iteration (s): 0.38 | learning rate: 1.923E-04 | global batch size: 256 | lm loss: 3.581152E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.457 | TFLOPs: 31.48 | 7: iteration 16400/ 115203 | consumed samples: 4198400 | consumed tokens: 8598323200 | elapsed time per iteration (s): 0.38 | learning rate: 1.922E-04 | global batch size: 256 | lm loss: 3.584376E+00 | grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.258 | TFLOPs: 31.47 | 7: iteration 16500/ 115203 | consumed samples: 4224000 | consumed tokens: 8650752000 | elapsed time per iteration (s): 0.38 | learning rate: 1.921E-04 | global batch size: 256 | lm loss: 3.578944E+00 | grad norm: 0.274 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.254 | TFLOPs: 31.43 | 7: iteration 16600/ 115203 | consumed samples: 4249600 | consumed tokens: 8703180800 | elapsed time per iteration (s): 0.38 | learning rate: 1.920E-04 | global batch size: 256 | lm loss: 3.576777E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.436 | TFLOPs: 31.48 | 7: iteration 16700/ 115203 | consumed samples: 4275200 | consumed tokens: 8755609600 | elapsed time per iteration (s): 0.38 | learning rate: 1.919E-04 | global batch size: 256 | lm loss: 3.574200E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.857 | TFLOPs: 31.45 | 7: iteration 16800/ 115203 | consumed samples: 4300800 | consumed tokens: 8808038400 | elapsed time per iteration (s): 0.38 | learning rate: 1.918E-04 | global batch size: 256 | lm loss: 3.572304E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.191 | TFLOPs: 31.47 | 7: iteration 16900/ 115203 | consumed samples: 4326400 | consumed tokens: 8860467200 | elapsed time per iteration (s): 0.38 | learning rate: 1.917E-04 | global batch size: 256 | lm loss: 3.572963E+00 | 
grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.001 | TFLOPs: 31.46 | 7: iteration 17000/ 115203 | consumed samples: 4352000 | consumed tokens: 8912896000 | elapsed time per iteration (s): 0.38 | learning rate: 1.916E-04 | global batch size: 256 | lm loss: 3.571121E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.243 | TFLOPs: 31.47 | 7: iteration 17100/ 115203 | consumed samples: 4377600 | consumed tokens: 8965324800 | elapsed time per iteration (s): 0.39 | learning rate: 1.915E-04 | global batch size: 256 | lm loss: 3.569434E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 656.166 | TFLOPs: 30.63 | 7: iteration 17200/ 115203 | consumed samples: 4403200 | consumed tokens: 9017753600 | elapsed time per iteration (s): 0.38 | learning rate: 1.913E-04 | global batch size: 256 | lm loss: 3.570282E+00 | grad norm: 0.347 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.580 | TFLOPs: 31.49 | 7: iteration 17300/ 115203 | consumed samples: 4428800 | consumed tokens: 9070182400 | elapsed time per iteration (s): 0.38 | learning rate: 1.912E-04 | global batch size: 256 | lm loss: 3.567252E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 665.817 | TFLOPs: 31.08 | 7: iteration 17400/ 115203 | consumed samples: 4454400 | consumed tokens: 9122611200 | elapsed time per iteration (s): 0.38 | learning rate: 1.911E-04 | global batch size: 256 | lm loss: 3.562468E+00 | grad norm: 0.276 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.580 | TFLOPs: 31.44 | 7: iteration 17500/ 115203 | consumed samples: 4480000 | consumed tokens: 9175040000 | elapsed time per iteration (s): 0.39 | learning rate: 1.910E-04 | global batch size: 256 | lm loss: 3.562297E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 653.464 | TFLOPs: 30.50 | 7: iteration 17600/ 115203 | consumed samples: 4505600 | consumed tokens: 9227468800 | elapsed time per iteration (s): 0.38 | learning rate: 1.909E-04 | global batch size: 256 | lm loss: 3.561473E+00 | grad norm: 0.276 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.656 | TFLOPs: 31.44 | 7: iteration 17700/ 115203 | consumed samples: 4531200 | consumed tokens: 9279897600 | elapsed time per iteration (s): 0.38 | learning rate: 1.908E-04 | global batch size: 256 | lm loss: 3.561608E+00 | grad norm: 0.264 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.384 | TFLOPs: 31.48 | 7: iteration 17800/ 115203 | consumed samples: 4556800 | consumed tokens: 9332326400 | elapsed time per iteration (s): 0.40 | learning rate: 1.907E-04 | global batch size: 256 | lm loss: 3.558945E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 646.709 | TFLOPs: 30.19 | 7: iteration 17900/ 115203 | consumed samples: 4582400 | consumed tokens: 9384755200 | elapsed time per iteration (s): 0.38 | learning rate: 1.906E-04 | global batch size: 256 | lm loss: 3.557624E+00 | grad norm: 0.304 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 674.774 | TFLOPs: 31.50 | 0: [2023-03-17 12:51:03,022] [INFO] [logging.py:68:log_dist] [Rank 0] step=18000, skipped=0, lr=[0.00019048094388569267, 0.00019048094388569267, 0.00019048094388569267], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 18000/ 115203 | consumed samples: 4608000 | consumed tokens: 9437184000 | elapsed time per iteration (s): 0.38 | learning rate: 1.905E-04 | global batch size: 256 | lm loss: 3.556437E+00 | grad norm: 0.323 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.009 | TFLOPs: 31.46 | 0: steps: 18000 loss: 3.5703 iter time (s): 0.380 samples/sec: 674.099 7: iteration 18100/ 115203 | consumed samples: 4633600 | consumed tokens: 9489612800 | elapsed time per iteration (s): 0.38 | learning rate: 1.904E-04 | global batch size: 256 | lm loss: 3.555786E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 665.468 | TFLOPs: 31.06 | 7: iteration 18200/ 115203 | consumed samples: 4659200 | consumed tokens: 9542041600 | elapsed time per iteration (s): 0.38 | learning rate: 1.903E-04 | global batch size: 256 | lm loss: 3.557588E+00 | grad norm: 0.276 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.519 | TFLOPs: 31.48 | 7: iteration 18300/ 115203 | consumed samples: 4684800 | consumed tokens: 9594470400 | elapsed time per iteration (s): 0.40 | learning rate: 1.901E-04 | global batch size: 256 | lm loss: 3.554680E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 633.406 | TFLOPs: 29.57 | 7: iteration 18400/ 115203 | consumed samples: 4710400 | consumed tokens: 9646899200 | elapsed time per iteration (s): 0.38 | learning rate: 1.900E-04 | global batch size: 256 | lm loss: 3.551351E+00 | grad norm: 0.333 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.556 | TFLOPs: 31.49 | 7: iteration 18500/ 115203 | consumed samples: 4736000 | consumed tokens: 9699328000 | elapsed time per iteration (s): 0.38 | learning rate: 1.899E-04 | global batch size: 256 | lm loss: 3.548786E+00 | grad norm: 0.297 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.493 | TFLOPs: 31.48 | 7: iteration 18600/ 115203 | consumed samples: 4761600 | consumed tokens: 9751756800 | elapsed time per iteration (s): 0.40 | learning rate: 1.898E-04 | global batch size: 256 | lm loss: 3.547783E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 643.551 | TFLOPs: 30.04 | 7: iteration 18700/ 115203 | consumed samples: 4787200 | consumed tokens: 9804185600 | elapsed time per iteration (s): 0.40 | learning rate: 1.897E-04 | global batch size: 256 | lm loss: 3.548416E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 645.196 | TFLOPs: 30.12 | 7: iteration 18800/ 115203 | consumed samples: 4812800 | consumed tokens: 9856614400 | elapsed time per iteration (s): 0.38 | learning rate: 1.896E-04 | global batch size: 256 | lm loss: 3.545468E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.184 | TFLOPs: 31.19 | 7: iteration 18900/ 115203 | consumed samples: 4838400 | 
consumed tokens: 9909043200 | elapsed time per iteration (s): 0.38 | learning rate: 1.895E-04 | global batch size: 256 | lm loss: 3.546356E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.519 | TFLOPs: 31.48 | 7: iteration 19000/ 115203 | consumed samples: 4864000 | consumed tokens: 9961472000 | elapsed time per iteration (s): 0.39 | learning rate: 1.893E-04 | global batch size: 256 | lm loss: 3.544450E+00 | grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 664.910 | TFLOPs: 31.04 | 7: iteration 19100/ 115203 | consumed samples: 4889600 | consumed tokens: 10013900800 | elapsed time per iteration (s): 0.38 | learning rate: 1.892E-04 | global batch size: 256 | lm loss: 3.542445E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.031 | TFLOPs: 31.46 | 7: iteration 19200/ 115203 | consumed samples: 4915200 | consumed tokens: 10066329600 | elapsed time per iteration (s): 0.40 | learning rate: 1.891E-04 | global batch size: 256 | lm loss: 3.541938E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 633.211 | TFLOPs: 29.56 | 7: iteration 19300/ 115203 | consumed samples: 4940800 | consumed tokens: 10118758400 | elapsed time per iteration (s): 0.38 | learning rate: 1.890E-04 | global batch size: 256 | lm loss: 3.539992E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.885 | TFLOPs: 31.50 | 7: iteration 19400/ 115203 | consumed samples: 4966400 | consumed tokens: 10171187200 | elapsed time per iteration (s): 0.38 | learning rate: 1.889E-04 | global batch size: 256 | lm loss: 3.538345E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.227 | TFLOPs: 31.42 | 7: iteration 19500/ 115203 | consumed samples: 4992000 | consumed tokens: 10223616000 | elapsed time per iteration (s): 0.38 | learning rate: 1.887E-04 | global batch size: 256 | lm loss: 3.537169E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.496 | TFLOPs: 31.39 | 7: iteration 19600/ 115203 | consumed samples: 5017600 | consumed tokens: 10276044800 | elapsed time per iteration (s): 0.38 | learning rate: 1.886E-04 | global batch size: 256 | lm loss: 3.534279E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.802 | TFLOPs: 31.22 | 7: iteration 19700/ 115203 | consumed samples: 5043200 | consumed tokens: 10328473600 | elapsed time per iteration (s): 0.38 | learning rate: 1.885E-04 | global batch size: 256 | lm loss: 3.534193E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.180 | TFLOPs: 31.37 | 7: iteration 19800/ 115203 | consumed samples: 5068800 | consumed tokens: 10380902400 | elapsed time per iteration (s): 0.40 | learning rate: 1.884E-04 | global batch size: 256 | lm loss: 3.534501E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 638.400 | TFLOPs: 29.80 | 7: iteration 19900/ 115203 | consumed samples: 5094400 | consumed tokens: 10433331200 | elapsed time per iteration (s): 
0.38 | learning rate: 1.883E-04 | global batch size: 256 | lm loss: 3.534697E+00 | grad norm: 0.373 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.147 | TFLOPs: 31.37 | 0: [2023-03-17 13:03:55,687] [INFO] [logging.py:68:log_dist] [Rank 0] step=20000, skipped=0, lr=[0.00018814068619753637, 0.00018814068619753637, 0.00018814068619753637], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 20000/ 115203 | consumed samples: 5120000 | consumed tokens: 10485760000 | elapsed time per iteration (s): 0.39 | learning rate: 1.881E-04 | global batch size: 256 | lm loss: 3.531132E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 661.099 | TFLOPs: 30.86 | 0: steps: 20000 loss: 3.5065 iter time (s): 0.384 samples/sec: 666.354 7: ------------------------------------------------------------------------------------------------ 7: validation loss at iteration 20000 | lm loss value: 3.887379E+00 | lm loss PPL: 4.878288E+01 | 7: ------------------------------------------------------------------------------------------------ 0: saving checkpoint at iteration 20000 to checkpoints_146m60b100mdedup 0: [2023-03-17 13:03:55,848] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step20000 is begin to save! 0: [2023-03-17 13:03:58,937] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_01-model_00-model_states.pt... 0: [2023-03-17 13:03:59,035] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_01-model_00-model_states.pt. 0: [2023-03-17 13:03:59,036] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_03-model_00-model_states.pt... 0: [2023-03-17 13:03:59,052] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_03-model_00-model_states.pt. 0: [2023-03-17 13:03:59,052] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_04-model_00-model_states.pt... 0: [2023-03-17 13:03:59,067] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_04-model_00-model_states.pt. 0: [2023-03-17 13:03:59,067] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_05-model_00-model_states.pt... 0: [2023-03-17 13:03:59,083] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_05-model_00-model_states.pt. 0: [2023-03-17 13:03:59,083] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_06-model_00-model_states.pt... 0: [2023-03-17 13:03:59,099] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_06-model_00-model_states.pt. 0: [2023-03-17 13:03:59,099] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_07-model_00-model_states.pt... 0: [2023-03-17 13:03:59,114] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_07-model_00-model_states.pt. 
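The reported validation perplexity is just the exponential of the validation lm loss, and the iteration counter gives a rough sense of progress. A quick check of the step-20000 numbers above (the time estimate counts only training iterations, not evaluation or checkpointing):

    import math

    # Perplexity is exp(loss):
    print(math.exp(3.887379))                    # ~48.78, matching "lm loss PPL: 4.878288E+01"

    # Rough progress / remaining wall-clock at ~0.38 s per iteration:
    done, total, sec_per_iter = 20_000, 115_203, 0.38
    print(done / total)                          # ~0.17 of the planned iterations
    print((total - done) * sec_per_iter / 3600)  # ~10 hours of pure training time left

Since the launch configuration requests only a single evaluation iteration, this validation loss is effectively a one-batch estimate and some noise between evaluations is expected.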
0: [2023-03-17 13:03:59,114] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_08-model_00-model_states.pt... 0: [2023-03-17 13:03:59,130] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_08-model_00-model_states.pt. 0: [2023-03-17 13:03:59,130] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_09-model_00-model_states.pt... 0: [2023-03-17 13:03:59,145] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_09-model_00-model_states.pt. 0: [2023-03-17 13:03:59,146] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_10-model_00-model_states.pt... 0: [2023-03-17 13:03:59,161] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_10-model_00-model_states.pt. 0: [2023-03-17 13:03:59,161] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_11-model_00-model_states.pt... 0: [2023-03-17 13:03:59,176] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_11-model_00-model_states.pt. 0: [2023-03-17 13:03:59,177] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_12-model_00-model_states.pt... 0: [2023-03-17 13:03:59,192] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_12-model_00-model_states.pt. 0: [2023-03-17 13:03:59,192] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_13-model_00-model_states.pt... 0: [2023-03-17 13:03:59,208] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_13-model_00-model_states.pt. 0: [2023-03-17 13:03:59,208] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_14-model_00-model_states.pt... 0: [2023-03-17 13:03:59,223] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_14-model_00-model_states.pt. 0: [2023-03-17 13:03:59,223] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_15-model_00-model_states.pt... 0: [2023-03-17 13:03:59,238] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_15-model_00-model_states.pt. 0: [2023-03-17 13:03:59,239] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_16-model_00-model_states.pt... 0: [2023-03-17 13:03:59,254] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_16-model_00-model_states.pt. 0: [2023-03-17 13:03:59,254] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_17-model_00-model_states.pt... 0: [2023-03-17 13:03:59,269] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_17-model_00-model_states.pt. 
0: [2023-03-17 13:03:59,270] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/layer_19-model_00-model_states.pt... 0: [2023-03-17 13:03:59,271] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/layer_19-model_00-model_states.pt. 0: [2023-03-17 13:03:59,271] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_146m60b100mdedup/global_step20000/mp_rank_00_model_states.pt 0: [2023-03-17 13:03:59,272] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/mp_rank_00_model_states.pt... 0: [2023-03-17 13:03:59,274] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/mp_rank_00_model_states.pt. 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 
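The files being written here split the checkpoint into two parts: the layer_XX-model_00-model_states.pt files plus mp_rank_00_model_states.pt hold the model-weight side (the fifteen layer_03 through layer_17 shards line up with the fifteen transformer blocks, with the first and last indices presumably covering the embedding and final norm), while the bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt files that start here hold each data-parallel rank's optimizer state. A hypothetical offline inspection sketch (paths are illustrative; these are ordinary torch.save artifacts, so the exact keys may differ from what is assumed here):

    import torch

    ckpt = "checkpoints_146m60b100mdedup/global_step20000"

    # A layer shard typically holds that layer's parameter state_dict.
    layer = torch.load(f"{ckpt}/layer_03-model_00-model_states.pt", map_location="cpu")
    print({k: tuple(v.shape) for k, v in layer.items() if hasattr(v, "shape")})

    # The mp_rank file carries the DeepSpeed engine/module state for model-parallel rank 0.
    meta = torch.load(f"{ckpt}/mp_rank_00_model_states.pt", map_location="cpu")
    print(list(meta.keys()))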
2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 
6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 
3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:59,292] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:59,329] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,330] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,330] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 
2: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,330] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,330] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,330] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:59,330] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,330] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:59,331] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:59,331] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,331] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:59,331] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:59,332] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,332] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:59,332] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:59,335] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,335] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:59,335] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 
0: [2023-03-17 13:03:59,336] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,336] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:59,337] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:59,337] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:59,337] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:59,337] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:59,339] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,339] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,339] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,339] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,339] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:59,339] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:59,343] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,343] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,343] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:59,343] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:59,343] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:59,343] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 
7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 7: [2023-03-17 13:03:59,348] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 
7: [2023-03-17 13:03:59,348] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now!
0: [2023-03-17 13:03:59,356] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt
0: [2023-03-17 13:03:59,356] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now!
1: [2023-03-17 13:03:59,357] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt.
1: [2023-03-17 13:03:59,357] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m60b100mdedup/global_step20000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt
1: [2023-03-17 13:03:59,357] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now!
[identical per-shard "Saved" / "bf16_zero checkpoint saved" messages and repeated "Checkpoint global_step20000 is ready now!" commits, all timestamped 13:03:59, for bf16_zero_pp_rank_9 through _15 (node 1), _24 through _31 (node 3), _32 through _39 (node 4), _40 through _47 (node 5), and _48 through _55 (node 6) omitted for brevity]
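Each data-parallel rank writes its own optimizer-state shard under checkpoints_146m60b100mdedup/global_step20000/, following the bf16_zero_pp_rank_<N>_mp_rank_00_optim_states.pt pattern seen above. Below is a minimal sketch for inspecting that directory offline; it is a hypothetical helper, not part of the training scripts, and it assumes the checkpoint path is reachable from the working directory and that 8 nodes x 8 GPUs (i.e. 64 shards) were used.

# Hypothetical helper (not part of the run): inspect the per-rank optimizer-state
# shards that the log above reports for global_step20000.
import glob
import os

import torch  # only needed for the optional peek at the bottom

CKPT_DIR = "checkpoints_146m60b100mdedup/global_step20000"  # path as logged above

shards = sorted(
    glob.glob(os.path.join(CKPT_DIR, "bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt"))
)
total_bytes = sum(os.path.getsize(p) for p in shards)
print(f"{len(shards)} optimizer-state shards, {total_bytes / 2**30:.2f} GiB total")
# Assumption: with 64 data-parallel ranks (8 nodes x 8 GPUs), 64 shards are expected.

# Peek at one shard. The dictionary layout is DeepSpeed-internal, so only the
# top-level keys are listed rather than assuming their names; loading may fail
# if the environment cannot unpickle DeepSpeed-specific objects.
if shards:
    try:
        state = torch.load(shards[0], map_location="cpu")
        print(type(state).__name__, list(state)[:10] if isinstance(state, dict) else "")
    except Exception as err:
        print("could not load shard:", err)

Listing only the top-level keys keeps the check independent of DeepSpeed's internal shard format.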
0: successfully saved checkpoint at iteration 20000 to checkpoints_146m60b100mdedup
7: time (ms) | save-checkpoint: 3549.18
7: iteration 20100/ 115203 | consumed samples: 5145600 | consumed tokens: 10538188800 | elapsed time per iteration (s): 0.49 | learning rate: 1.880E-04 | global batch size: 256 | lm loss: 3.532430E+00 | grad norm: 0.302 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 523.588 | TFLOPs: 24.44 |
7: iteration 20200/ 115203 | consumed samples: 5171200 | consumed tokens: 10590617600 | elapsed time per iteration (s): 0.39 | learning rate: 1.879E-04 | global batch size: 256 | lm loss: 3.529409E+00 | grad norm: 0.325 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 656.876 | TFLOPs: 30.66 |
7: iteration 20300/ 115203 | consumed samples: 5196800 | consumed tokens: 10643046400 | elapsed time per iteration (s): 0.40 | learning rate: 1.878E-04 | global batch size: 256 | lm loss: 3.528116E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 645.777 | TFLOPs: 30.14 |
7: iteration 20400/ 115203 | consumed samples: 5222400 | consumed tokens: 10695475200 | elapsed time per iteration (s): 0.39 | learning rate: 1.876E-04 | global batch size: 256 | lm loss: 3.530347E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 664.095 | TFLOPs: 31.00 |
7: iteration 20500/ 115203 | consumed samples: 5248000 | consumed tokens: 10747904000 | elapsed time per iteration (s): 0.38 | learning rate: 1.875E-04 | global batch size: 256 | lm loss: 3.525868E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 664.944 | TFLOPs: 31.04 |
7: iteration 20600/ 115203 | consumed samples: 5273600 | consumed tokens: 10800332800 | elapsed time per iteration (s): 0.38 | learning rate: 1.874E-04 | global batch size: 256 | lm loss: 3.525795E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 666.472 | TFLOPs: 31.11 |
7: iteration 20700/ 115203 | consumed samples: 5299200 | consumed tokens: 10852761600 | elapsed time per iteration (s): 0.38 | learning rate: 1.873E-04 | global batch size: 256 | lm loss: 3.524839E+00 | grad norm: 0.256 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 667.465 | TFLOPs: 31.15 |
7: iteration 20800/ 115203 | consumed samples: 5324800 | consumed tokens: 10905190400 | elapsed time per iteration (s): 0.38 | learning rate: 1.871E-04 | global batch size: 256 | lm loss: 3.521103E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 665.242 | TFLOPs: 31.05 |
7: iteration 20900/ 115203 | consumed samples: 5350400 | consumed tokens: 10957619200 | elapsed time per iteration (s): 0.38 | learning rate: 1.870E-04 | global batch size: 256 | lm loss: 3.521970E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 666.336 | TFLOPs: 31.10 |
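The per-iteration metrics above are mutually consistent and can be re-derived from the run configuration. The sketch below checks three of them; it assumes 64 GPUs in total (8 nodes x 8 GPUs, inferred from the rank prefixes 0-7), the standard Megatron FLOPs estimate with activation recomputation, and GPT-2's 50257-token vocabulary (the padded vocabulary size is not printed in this log), and it reconstructs the cosine schedule in a simplified form that may differ from Megatron's implementation in small details.

# Back-of-the-envelope check of the logged metrics (see assumptions above).
import math

# model / run configuration from the command line at the top of the log
hidden, layers, seq, gbs, vocab = 768, 15, 2048, 256, 50257
gpus = 64                                    # assumption: 8 nodes x 8 GPUs
max_lr, min_lr = 2e-4, 2e-5
warmup_samples, decay_samples = 294_922, 29_492_188

# 1) consumed tokens = consumed samples * sequence length
samples = 5_145_600                          # logged at iteration 20100
print(samples * seq)                         # -> 10538188800, as logged

# 2) TFLOPs per GPU from samples/s, using the Megatron estimate with
#    activation recomputation (forward + backward + one recompute forward):
#    F = 96*B*s*l*h^2 * (1 + s/(6h) + V/(16*l*h)) FLOPs per iteration
flops_per_iter = (
    96 * gbs * seq * layers * hidden**2
    * (1 + seq / (6 * hidden) + vocab / (16 * layers * hidden))
)
samples_per_sec = 656.876                    # logged at iteration 20200
tflops_per_gpu = flops_per_iter * (samples_per_sec / gbs) / gpus / 1e12
print(f"{tflops_per_gpu:.2f} TFLOPs/GPU")    # ~30.7, close to the 30.66 logged

# 3) cosine learning-rate decay after linear warmup (simplified reconstruction
#    of --lr-decay-style cosine between --lr and --min-lr)
progress = (samples - warmup_samples) / (decay_samples - warmup_samples)
lr = min_lr + (max_lr - min_lr) * 0.5 * (1 + math.cos(math.pi * progress))
print(f"{lr:.3E}")                           # ~1.880E-04, as logged at 20100

At roughly 660 samples per second and 2048 tokens per sample, the cluster is processing about 1.35 million tokens per second at this point in the run.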