Model parameters: d_model 768  ffw_size 3072  kv_size 64  n_heads 12  n_layers 15

Megatron-DeepSpeed/pretrain_gpt.py \
    --tensor-model-parallel-size 1 \
    --pipeline-model-parallel-size 1 \
    --num-layers 15 \
    --hidden-size 768 \
    --num-attention-heads 12 \
    --kv-channels 64 \
    --ffn-hidden-size 3072 \
    --seq-length 2048 \
    --max-position-embeddings 2048 \
    --micro-batch-size 4 \
    --global-batch-size 256 \
    --train-samples 84_762_549 \
    --vocab-file gpt2/vocab.json \
    --merge-file gpt2/merges.txt \
    --loss-scale 12 \
    --clip-grad 1.0 \
    --kill-switch-path kill-switch-146m174b100mdedup \
    --bf16 \
    --checkpoint-activations \
    --optimizer adam \
    --adam-beta1 0.9 \
    --adam-beta2 0.999 \
    --adam-eps 1e-8 \
    --lr 2e-4 \
    --min-lr 2e-5 \
    --lr-decay-style cosine \
    --lr-decay-samples 84_762_549 \
    --lr-warmup-samples 847_625 \
    --clip-grad 1.0 \
    --weight-decay 1e-1 \
    --log-interval 100 \
    --save-interval 10000 \
    --eval-interval 10000 \
    --eval-iters 1 \
    --tensorboard-dir tensorboard_146m174b100mdedup \
    --tensorboard-queue-size 5 \
    --log-timers-to-tensorboard \
    --log-batch-size-to-tensorboard \
    --log-validation-ppl-to-tensorboard \
    --save checkpoints_146m174b100mdedup \
    --load checkpoints_146m174b100mdedup \
    --train-weighted-split-paths-path train100mdedup.txt \
    --valid-weighted-split-paths-path val.txt \
    --data-impl mmap \
    --deepspeed \
    --deepspeed_config ds_configs/3328806.json \
    --zero-stage 0

START 3328806: Fri 17 Mar 2023 10:52:41 AM EET

0: ======================= ROCm System Management Interface =======================
0: ================================= Concise Info =================================
0: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
0: 0    50.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 1    49.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 2    39.0c  89.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 3    39.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 4    37.0c  92.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 5    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 6    39.0c  89.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 7    43.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: ================================================================================
0: ============================= End of ROCm SMI Log ==============================
(Nodes 1-7 printed equivalent concise ROCm SMI reports at launch: all GPUs idle at 800Mhz SCLK / 1600Mhz MCLK, fan 0%, perf auto, 0% VRAM and 0% GPU utilization, temperatures 35-50c, 83-97W on the dies that report power (PwrCap 560.0W) and N/A on their paired dies (PwrCap 0.0W).)
2: Launching on nid006547 (2/8), master nid006545 port 9999, GPUs 8, CUDA: True
7: Launching on nid006552 (7/8), master nid006545 port 9999, GPUs 8, CUDA: True
1: Launching on nid006546 (1/8), master nid006545 port 9999, GPUs 8, CUDA: True
6: Launching on nid006551 (6/8), master nid006545 port 9999, GPUs 8, CUDA: True
5: Launching on nid006550 (5/8), master nid006545 port 9999, GPUs 8, CUDA: True
4: Launching on nid006549 (4/8), master nid006545 port 9999, GPUs 8, CUDA: True
3: Launching on nid006548 (3/8), master nid006545 port 9999, GPUs 8, CUDA: True
0: Launching on nid006545 (0/8), master nid006545 port 9999, GPUs 8, CUDA: True
0: using world size: 64, data-parallel-size: 64, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
0: using torch.bfloat16 for parameters ...
0: ------------------------ arguments ------------------------
0: abort_on_unmet_fused_kernel_constraints ......... False
0: accumulate_allreduce_grads_in_fp32 .............. True
0: adam_beta1 ...................................... 0.9
0: adam_beta2 ...................................... 0.999
0: adam_eps ........................................ 1e-08
0: adlr_autoresume ................................. False
0: adlr_autoresume_interval ........................ 1000
0: apply_query_key_layer_scaling ................... True
0: apply_residual_connection_post_layernorm ........ False
0: attention_dropout ............................... 0.1
0: attention_softmax_in_fp32 ....................... False
0: bert_binary_head ................................ True
0: bert_load ....................................... None
0: bf16 ............................................ True
0: bias_dropout_fusion ............................. True
0: bias_gelu_fusion ................................ True
0: biencoder_projection_dim ........................ 0
0: biencoder_shared_query_context_model ............ False
0: block_data_path ................................. None
0: checkpoint_activations .......................... True
0: checkpoint_in_cpu ............................... False
0: checkpoint_num_layers ........................... 1
0: clip_grad ....................................... 1.0
0: codecarbon_dir .................................. None
0: consumed_train_samples .......................... 0
0: consumed_train_tokens ........................... 0
0: consumed_valid_samples .......................... 0
0: contigious_checkpointing ........................ False
0: cpu_optimizer ................................... False
0: cpu_torch_adam .................................. False
0: curriculum_learning ............................. False
0: data_impl ....................................... mmap
0: data_parallel_size .............................. 64
0: data_path ....................................... None
0: dataloader_type ................................. single
0: DDP_impl ........................................ local
0: decoder_seq_length .............................. None
0: deepscale ....................................... False
0: deepscale_config ................................ None
0: deepspeed ....................................... True
0: deepspeed_activation_checkpointing .............. False
0: deepspeed_config ................................ ds_configs/3328806.json
0: deepspeed_mpi ................................... False
0: distribute_checkpointed_activations ............. False
0: distributed_backend ............................. nccl
0: embed_layernorm ................................. False
0: embedding_path .................................. None
0: encoder_seq_length .............................. 2048
0: eod_mask_loss ................................... False
0: eval_interval ................................... 10000
0: eval_iters ...................................... 1
0: eval_only ....................................... None
0: evidence_data_path .............................. None
0: exit_duration_in_mins ........................... None
0: exit_interval ................................... None
0: ffn_hidden_size ................................. 3072
0: finetune ........................................ False
0: fp16 ............................................ False
0: fp16_lm_cross_entropy ........................... False
0: fp32_residual_connection ........................ False
0: gigaflos_no_embeds .............................. 0
0: global_batch_size ............................... 256
0: glu_activation .................................. None
0: hidden_dropout .................................. 0.1
0: hidden_size ..................................... 768
0: hysteresis ...................................... 2
0: ict_head_size ................................... None
0: ict_load ........................................ None
0: img_dim ......................................... 224
0: indexer_batch_size .............................. 128
0: indexer_log_interval ............................ 1000
0: inference ....................................... False
0: init_method_std ................................. 0.02
0: init_method_xavier_uniform ...................... False
0: initial_loss_scale .............................. 4294967296
0: kill_switch_path ................................ kill-switch-146m174b100mdedup
0: kv_channels ..................................... 64
0: layer_norm_fusion ............................... True
0: layernorm_epsilon ............................... 1e-05
0: lazy_mpu_init ................................... None
0: load ............................................ checkpoints_146m174b100mdedup
0: local_rank ...................................... None
0: log_batch_size_to_tensorboard ................... True
0: log_interval .................................... 100
0: log_learning_rate_to_tensorboard ................ True
0: log_level ....................................... None
0: log_level_replica ............................... None
0: log_loss_scale_to_tensorboard ................... True
0: log_num_zeros_in_grad ........................... False
0: log_params_norm ................................. False
0: log_path ........................................ None
0: log_timers_to_tensorboard ....................... True
0: log_validation_ppl_to_tensorboard ............... True
0: loss_on_targets_only ............................ False
0: loss_scale ...................................... 12.0
0: loss_scale_window ............................... 1000
0: lr .............................................. 0.0002
0: lr_decay_iters .................................. None
0: lr_decay_samples ................................ 84762549
0: lr_decay_style .................................. cosine
0: lr_decay_tokens ................................. None
0: lr_warmup_fraction .............................. None
0: lr_warmup_iters ................................. 0
0: lr_warmup_samples ............................... 847625
0: make_vocab_size_divisible_by .................... 128
0: mask_prob ....................................... 0.15
0: masked_softmax_fusion ........................... True
0: max_position_embeddings ......................... 2048
0: mean_noise_span_length .......................... None
0: memory_centric_tiled_linear ..................... False
0: merge_file ...................................... gpt2/merges.txt
0: micro_batch_size ................................ 4
0: min_loss_scale .................................. 1.0
0: min_lr .......................................... 2e-05
0: mmap_warmup ..................................... False
0: no_load_optim ................................... None
0: no_load_rng ..................................... None
0: no_save_optim ................................... None
0: no_save_rng ..................................... None
0: noise_density ................................... None
0: num_attention_heads ............................. 12
0: num_channels .................................... 3
0: num_classes ..................................... 1000
0: num_layers ...................................... 15
0: num_layers_per_virtual_pipeline_stage ........... None
0: num_workers ..................................... 2
0: onnx_safe ....................................... None
0: openai_gelu ..................................... False
0: optimizer ....................................... adam
0: optimizer_fusion ................................ True
0: override_lr_scheduler ........................... False
0: pad_vocab_size_to ............................... None
0: params_dtype .................................... torch.bfloat16
0: partition_activations ........................... False
0: patch_dim ....................................... 16
0: pipeline_model_parallel_size .................... 1
0: position_embedding_type ......................... PositionEmbeddingType.absolute
0: pp_partition_method ............................. None
0: profile_backward ................................ False
0: query_in_block_prob ............................. 0.1
0: rampup_batch_size ............................... None
0: rank ............................................ 0
0: remote_device ................................... none
0: reset_attention_mask ............................ False
0: reset_position_ids .............................. False
0: reset_progress .................................. None
0: retriever_report_topk_accuracies ................ []
0: retriever_score_scaling ......................... False
0: retriever_seq_length ............................ 256
0: reweight_loss_based_on_position_frequency ....... False
0: sample_rate ..................................... 1.0
0: save ............................................ checkpoints_146m174b100mdedup
0: save_interval ................................... 10000
0: scatter_gather_tensors_in_pipeline .............. True
0: scattered_embeddings ............................ False
0: seed ............................................ 1234
0: seq_length ...................................... 2048
0: sgd_momentum .................................... 0.9
0: short_seq_prob .................................. 0.1
0: skip_train_iteration_range ...................... None
0: split ........................................... None
0: split_transformers .............................. False
0: sync_tp_duplicated_parameters ................... False
0: synchronize_each_layer .......................... False
0: tensor_model_parallel_size ...................... 1
0: tensorboard_dir ................................. tensorboard_146m174b100mdedup
0: tensorboard_log_interval ........................ 1
0: tensorboard_queue_size .......................... 5
0: test_weighted_split_paths ....................... None
0: test_weighted_split_paths_path .................. None
0: tile_factor ..................................... 1
0: titles_data_path ................................ None
0: tokenizer_name_or_path .......................... None
0: tokenizer_type .................................. GPT2BPETokenizer
0: train_iters ..................................... None
0: train_samples ................................... 84762549
0: train_tokens .................................... None
0: train_weighted_split_names ...................... ['train']
0: train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document']]
0: train_weighted_split_paths_path ................. None
0: train_weighted_split_splits ..................... [['0:1']]
0: train_weighted_split_weights .................... [['1.0']]
0: universal_checkpoint ............................ False
0: use_bnb_optimizer ............................... False
0: use_checkpoint_lr_scheduler ..................... False
0: use_contiguous_buffers_in_ddp ................... True
0: use_cpu_initialization .......................... None
0: use_one_sent_docs ............................... False
0: use_pin_memory .................................. False
0: valid_num_workers ............................... 2
0: valid_weighted_split_names ...................... ['validation']
0: valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']]
0: valid_weighted_split_paths_path ................. None
0: valid_weighted_split_splits ..................... [['0:1']]
0: valid_weighted_split_weights .................... [['1.0']]
0: virtual_pipeline_model_parallel_size ............ None
0: vocab_extra_ids ................................. 0
0: vocab_file ...................................... gpt2/vocab.json
0: weight_decay .................................... 0.1
0: world_size ...................................... 64
0: zero_allgather_bucket_size ...................... 0.0
0: zero_contigious_gradients ....................... False
0: zero_reduce_bucket_size ......................... 0.0
0: zero_reduce_scatter ............................. False
0: zero_stage ...................................... 0
0: -------------------- end of arguments ---------------------
0: setting number of micro-batches to constant 1
0: > building GPT2BPETokenizer tokenizer ...
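
The batch and schedule arithmetic implied by the arguments above can be sanity-checked directly; the sketch below uses only values copied from the argument dump (nothing is measured), and its results match the "setting number of micro-batches to constant 1" line above and the "setting training iterations to 331103" line later in the log.

# Sanity check of the batch/schedule arithmetic from the argument dump above.
world_size        = 64            # 8 nodes x 8 GPUs, TP=1, PP=1 -> pure data parallel
micro_batch_size  = 4
global_batch_size = 256
seq_length        = 2048
train_samples     = 84_762_549
lr_warmup_samples = 847_625

# Gradient-accumulation (micro-batch) steps per optimizer step.
micro_batches = global_batch_size // (micro_batch_size * world_size)
assert micro_batches == 1         # matches "setting number of micro-batches to constant 1"

# Optimizer steps over the whole run.
train_iters = train_samples // global_batch_size
assert train_iters == 331_103     # matches "setting training iterations to 331103"

# Total training tokens: ~173.6B, i.e. the "174b" in the run name 146m174b100mdedup.
train_tokens = train_samples * seq_length          # 173_593_700_352
warmup_frac  = lr_warmup_samples / train_samples   # ~0.01, i.e. a 1% warmup
print(train_tokens, round(warmup_frac, 4))
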
0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304) 0: DeepSpeed general environment info: 0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch'] 0: torch version .................... 1.13.0+rocm5.2 0: torch cuda version ............... None 0: torch hip version ................ 5.2.21151-afdc89f8 0: nvcc version ..................... None 0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed'] 0: deepspeed info ................... 0.7.5, unknown, unknown 0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1 0: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** 0: > initializing torch distributed ... 0: [2023-03-17 10:54:15,674] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl 7: > setting tensorboard ... 0: > initializing tensor model parallel with size 1 0: > initializing pipeline model parallel with size 1 0: > setting random seeds to 1234 ... 0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 0: > compiling dataset index builder ... 0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: make: Nothing to be done for 'default'. 0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' 0: >>> done with dataset index builder. Compilation time: 0.107 seconds 0: > compiling and loading fused kernels ... 
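
The "padded vocab" line above follows from make_vocab_size_divisible_by=128 and tensor_model_parallel_size=1 in the argument dump. A minimal sketch of that rounding rule is below; pad_vocab is an illustrative helper for this log, not Megatron's exact function.

# Sketch of the vocab-padding rule implied by
# "> padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)".
def pad_vocab(orig_size: int, divisible_by: int = 128, tp_size: int = 1) -> int:
    multiple = divisible_by * tp_size   # padded size must be a multiple of this
    padded = orig_size
    while padded % multiple != 0:       # round up to the next multiple
        padded += 1
    return padded

padded = pad_vocab(50257)               # GPT2BPETokenizer vocabulary size
assert padded == 50304 and padded - 50257 == 47
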
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 102 0: [1/1] c++ scaled_masked_softmax_hip.o scaled_masked_softmax_hip.cuda.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/pfs/lustrep2/projappl/project_462000125/samantao-public/rocm/rocm-5.2.3/lib -lamdhip64 -o scaled_masked_softmax_cuda.so 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes] 0: 
/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] 0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] 0: Total number of unsupported CUDA function calls: 0 0: 0: 0: Total number of replaced kernel launches: 67 0: ninja: no work to do. 0: >>> done with compiling and loading fused kernels. Compilation time: 31.297 seconds 0: time to initialize megatron (seconds): 89.882 0: [after megatron is initialized] datetime: 2023-03-17 10:54:49 0: building GPT model ... 
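
For reference, the 146,525,952 parameters that the DeepSpeed engine reports further down (STAGE_PARAMS/TOTAL_PARAMS=146525952) can be reproduced from the hyperparameters above, assuming the standard Megatron GPT layout: learned absolute position embeddings, tied input/output embeddings, LayerNorm with bias, fused QKV, and no TP/PP sharding. A back-of-the-envelope sketch:

# Reproduce TOTAL_PARAMS=146525952 from d_model=768, ffn=3072, 15 layers,
# seq 2048 and the padded vocab of 50304 (see tokenizer step above).
hidden, ffn_hidden, n_layers, seq_len, vocab = 768, 3072, 15, 2048, 50304

embeddings = vocab * hidden + seq_len * hidden   # word + position embeddings
per_layer = (
    2 * hidden                                   # input LayerNorm (weight + bias)
    + hidden * 3 * hidden + 3 * hidden           # fused QKV projection
    + hidden * hidden + hidden                   # attention output projection
    + 2 * hidden                                 # post-attention LayerNorm
    + hidden * ffn_hidden + ffn_hidden           # MLP h -> 4h
    + ffn_hidden * hidden + hidden               # MLP 4h -> h
)
final_ln = 2 * hidden
total = embeddings + n_layers * per_layer + final_ln
assert total == 146_525_952                      # matches the engine report below
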
0: [2023-03-17 10:54:50,019] [INFO] [utils.py:827:see_memory_usage] Before Building Model 0: [2023-03-17 10:54:50,020] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB 0: [2023-03-17 10:54:50,020] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.73 GB, percent = 6.1% 0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None 0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7, ProcessCoord(pipe=0, data=8, model=0): 8, ProcessCoord(pipe=0, data=9, model=0): 9, ProcessCoord(pipe=0, data=10, model=0): 10, ProcessCoord(pipe=0, data=11, model=0): 11, ProcessCoord(pipe=0, data=12, model=0): 12, ProcessCoord(pipe=0, data=13, model=0): 13, ProcessCoord(pipe=0, data=14, model=0): 14, ProcessCoord(pipe=0, data=15, model=0): 15, ProcessCoord(pipe=0, data=16, model=0): 16, ProcessCoord(pipe=0, data=17, model=0): 17, ProcessCoord(pipe=0, data=18, model=0): 18, ProcessCoord(pipe=0, data=19, model=0): 19, ProcessCoord(pipe=0, data=20, model=0): 20, ProcessCoord(pipe=0, data=21, model=0): 21, ProcessCoord(pipe=0, data=22, model=0): 22, ProcessCoord(pi 0: pe=0, data=23, model=0): 23, ProcessCoord(pipe=0, data=24, model=0): 24, ProcessCoord(pipe=0, data=25, model=0): 25, ProcessCoord(pipe=0, data=26, model=0): 26, ProcessCoord(pipe=0, data=27, model=0): 27, ProcessCoord(pipe=0, data=28, model=0): 28, ProcessCoord(pipe=0, data=29, model=0): 29, ProcessCoord(pipe=0, data=30, model=0): 30, ProcessCoord(pipe=0, data=31, model=0): 31, ProcessCoord(pipe=0, data=32, model=0): 32, ProcessCoord(pipe=0, data=33, model=0): 33, ProcessCoord(pipe=0, data=34, model=0): 34, ProcessCoord(pipe=0, data=35, model=0): 35, ProcessCoord(pipe=0, data=36, model=0): 36, ProcessCoord(pipe=0, data=37, model=0): 37, ProcessCoord(pipe=0, data=38, model=0): 38, ProcessCoord(pipe=0, data=39, model=0): 39, ProcessCoord(pipe=0, data=40, model=0): 40, ProcessCoord(pipe=0, data=41, model=0): 41, ProcessCoord(pipe=0, data=42, model=0): 42, ProcessCoord(pipe=0, data=43, model=0): 43, ProcessCoord(pipe=0, data=44, model=0): 44, ProcessCoord(pipe=0, data=45, model=0): 45, ProcessCoord(pipe=0, data=4 0: 6, model=0): 46, ProcessCoord(pipe=0, data=47, model=0): 47, ProcessCoord(pipe=0, data=48, model=0): 48, ProcessCoord(pipe=0, data=49, model=0): 49, ProcessCoord(pipe=0, data=50, model=0): 50, ProcessCoord(pipe=0, data=51, model=0): 51, ProcessCoord(pipe=0, data=52, model=0): 52, ProcessCoord(pipe=0, data=53, model=0): 53, ProcessCoord(pipe=0, data=54, model=0): 54, ProcessCoord(pipe=0, data=55, model=0): 55, ProcessCoord(pipe=0, data=56, model=0): 56, ProcessCoord(pipe=0, data=57, model=0): 57, ProcessCoord(pipe=0, data=58, model=0): 58, ProcessCoord(pipe=0, data=59, model=0): 59, ProcessCoord(pipe=0, data=60, model=0): 60, ProcessCoord(pipe=0, data=61, model=0): 61, ProcessCoord(pipe=0, data=62, model=0): 62, ProcessCoord(pipe=0, data=63, model=0): 63} 0: [2023-03-17 10:54:52,032] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer 0: stage=0 layers=22 0: 0: _to_float16 0: 1: EmbeddingPipe 0: 2: 0: 3: ParallelTransformerLayerPipe 0: 4: ParallelTransformerLayerPipe 0: 5: ParallelTransformerLayerPipe 0: 6: ParallelTransformerLayerPipe 0: 7: 
ParallelTransformerLayerPipe 0: 8: ParallelTransformerLayerPipe 0: 9: ParallelTransformerLayerPipe 0: 10: ParallelTransformerLayerPipe 0: 11: ParallelTransformerLayerPipe 0: 12: ParallelTransformerLayerPipe 0: 13: ParallelTransformerLayerPipe 0: 14: ParallelTransformerLayerPipe 0: 15: ParallelTransformerLayerPipe 0: 16: ParallelTransformerLayerPipe 0: 17: ParallelTransformerLayerPipe 0: 18: undo 0: 19: MixedFusedLayerNorm 0: 20: EmbeddingPipe 0: 21: float16_to_fp32 0: loss: CrossEntropy 0: [2023-03-17 10:54:52,384] [INFO] [utils.py:827:see_memory_usage] After Building Model 0: [2023-03-17 10:54:52,385] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.28 GB CA 0.29 GB Max_CA 0 GB 0: [2023-03-17 10:54:52,385] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.75 GB, percent = 6.1% 0: setting training iterations to 331103 0: > learning rate decay style: cosine 0: DeepSpeed is enabled. 0: [2023-03-17 10:54:52,387] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown 0: [2023-03-17 10:55:05,014] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False 0: [2023-03-17 10:55:05,015] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer 0: [2023-03-17 10:55:05,015] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer 0: [2023-03-17 10:55:05,019] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam 0: [2023-03-17 10:55:05,019] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer 0: [2023-03-17 10:55:05,137] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer 0: [2023-03-17 10:55:05,138] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.29 GB CA 0.31 GB Max_CA 0 GB 0: [2023-03-17 10:55:05,138] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.43 GB, percent = 6.2% 7: ninja: no work to do. 7: Time to load utils op: 0.14086437225341797 seconds 0: ninja: no work to do. 
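
The client LR scheduler mentioned above is configured for linear warmup over 847,625 samples followed by cosine decay from 2e-4 to 2e-5 at 84,762,549 samples (lr_decay_style cosine). The sketch below only reproduces that configured shape in sample space; the exact edge cases live in Megatron's learning-rate scheduler.

import math

MAX_LR, MIN_LR = 2e-4, 2e-5
WARMUP_SAMPLES = 847_625
DECAY_SAMPLES  = 84_762_549

def lr_at(consumed_samples: int) -> float:
    if consumed_samples < WARMUP_SAMPLES:                 # linear warmup from 0
        return MAX_LR * consumed_samples / WARMUP_SAMPLES
    progress = min(1.0, (consumed_samples - WARMUP_SAMPLES)
                        / (DECAY_SAMPLES - WARMUP_SAMPLES))
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))   # decays 1 -> 0
    return MIN_LR + (MAX_LR - MIN_LR) * cosine

print(lr_at(0), lr_at(WARMUP_SAMPLES), lr_at(DECAY_SAMPLES))
# 0.0 (matches the step=0 lr=[0.0, ...] line below), 2e-4, 2e-5
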
7: Time to load utils op: 0.0006330013275146484 seconds 0: Time to load utils op: 0.1462690830230713 seconds 5: Time to load utils op: 0.31119227409362793 seconds 0: Time to load utils op: 0.20420360565185547 seconds 0: Time to load utils op: 0.20270037651062012 seconds 0: Time to load utils op: 0.20326685905456543 seconds 0: Time to load utils op: 0.20317888259887695 seconds 0: Time to load utils op: 0.20302939414978027 seconds 0: Time to load utils op: 0.20342636108398438 seconds 0: Time to load utils op: 0.20308637619018555 seconds 5: Time to load utils op: 0.20448517799377441 seconds 5: Time to load utils op: 0.20463323593139648 seconds 5: Time to load utils op: 0.20434832572937012 seconds 5: Time to load utils op: 0.20419573783874512 seconds 5: Time to load utils op: 0.20494461059570312 seconds 5: Time to load utils op: 0.2049546241760254 seconds 5: Time to load utils op: 0.20467829704284668 seconds 7: Time to load utils op: 0.2041153907775879 secondsTime to load utils op: 0.20476388931274414 seconds 7: 7: Time to load utils op: 0.2043459415435791 seconds 7: Time to load utils op: 0.20490670204162598 seconds 7: Time to load utils op: 0.2039792537689209 seconds 7: Time to load utils op: 0.20406675338745117 seconds 7: Time to load utils op: 0.20496582984924316 seconds 0: Time to load utils op: 0.0006954669952392578 seconds 2: Time to load utils op: 0.2127072811126709 secondsTime to load utils op: 0.2127063274383545 secondsTime to load utils op: 0.2127079963684082 secondsTime to load utils op: 0.21267342567443848 seconds 2: 2: 2: Time to load utils op: 0.21271395683288574 seconds 2: Time to load utils op: 0.21271538734436035 seconds 2: 2: Time to load utils op: 0.21272039413452148 seconds 0: Time to load utils op: 0.0004038810729980469 seconds 2: Time to load utils op: 0.21265912055969238 seconds 0: Time to load utils op: 0.00042724609375 seconds 1: Time to load utils op: 0.21457529067993164 seconds 1: Time to load utils op: 0.21457123756408691 secondsTime to load utils op: 0.21459579467773438 seconds 1: 1: Time to load utils op: 0.21463394165039062 seconds 1: Time to load utils op: 0.2146313190460205 seconds 1: Time to load utils op: 0.2146129608154297 seconds 1: Time to load utils op: 0.21462678909301758 seconds 1: Time to load utils op: 0.2146449089050293 seconds 3: Time to load utils op: 0.21189594268798828 seconds 4: Time to load utils op: 0.21152400970458984 seconds 3: Time to load utils op: 0.21187949180603027 seconds 3: Time to load utils op: 0.21189188957214355 secondsTime to load utils op: 0.21191668510437012 seconds 3: 3: Time to load utils op: 0.21190667152404785 secondsTime to load utils op: 0.21194195747375488 secondsTime to load utils op: 0.21194195747375488 seconds 3: 3: 3: Time to load utils op: 0.2119448184967041 seconds 4: Time to load utils op: 0.21151304244995117 seconds 4: Time to load utils op: 0.21155929565429688 seconds 4: Time to load utils op: 0.21156787872314453 seconds 4: Time to load utils op: 0.21159744262695312 seconds 4: Time to load utils op: 0.21158242225646973 secondsTime to load utils op: 0.21160435676574707 seconds 4: Time to load utils op: 0.21160650253295898 seconds 4: 0: Time to load utils op: 0.0003883838653564453 seconds 0: Time to load utils op: 0.0003848075866699219 seconds 6: Time to load utils op: 0.21043705940246582 secondsTime to load utils op: 0.21042966842651367 seconds 6: 6: Time to load utils op: 0.21045351028442383 seconds 6: Time to load utils op: 0.21047115325927734 seconds 0: Time to load utils op: 0.0003879070281982422 seconds 6: 
Time to load utils op: 0.21048235893249512 seconds 6: Time to load utils op: 0.21048498153686523 seconds 6: Time to load utils op: 0.2104935646057129 secondsTime to load utils op: 0.21049094200134277 seconds 6: 0: Time to load utils op: 0.0003972053527832031 seconds 7: Time to load utils op: 0.0003342628479003906 seconds 7: Time to load utils op: 0.0003666877746582031 seconds 7: Time to load utils op: 0.00034332275390625 seconds 7: Time to load utils op: 0.00033283233642578125 seconds 7: Time to load utils op: 0.0003294944763183594 seconds 7: Time to load utils op: 0.00031280517578125 seconds 7: Time to load utils op: 0.00036454200744628906 seconds 5: Time to load utils op: 0.0005047321319580078 seconds 5: Time to load utils op: 0.00048804283142089844 seconds 5: Time to load utils op: 0.0005328655242919922 seconds 5: Time to load utils op: 0.0005156993865966797 seconds 5: Time to load utils op: 0.0005125999450683594 seconds 5: Time to load utils op: 0.0005490779876708984 seconds 5: Time to load utils op: 0.0005006790161132812 seconds 5: Time to load utils op: 0.0005238056182861328 seconds 2: Time to load utils op: 0.0007574558258056641 seconds 2: Time to load utils op: 0.0007863044738769531 secondsTime to load utils op: 0.0007793903350830078 seconds 2: 2: Time to load utils op: 0.0008261203765869141 seconds 2: Time to load utils op: 0.0009980201721191406 seconds 2: Time to load utils op: 0.0010082721710205078 seconds 2: Time to load utils op: 0.0009946823120117188 seconds 2: Time to load utils op: 0.0010488033294677734 seconds 3: Time to load utils op: 0.0011768341064453125 seconds 3: Time to load utils op: 0.0012428760528564453 seconds 3: Time to load utils op: 0.001399993896484375 secondsTime to load utils op: 0.0013511180877685547 seconds 3: Time to load utils op: 0.0014500617980957031 seconds 3: 3: Time to load utils op: 0.001348733901977539 seconds 3: Time to load utils op: 0.001443624496459961 seconds 3: Time to load utils op: 0.0013556480407714844 seconds 1: Time to load utils op: 0.0008301734924316406 seconds 1: Time to load utils op: 0.0010559558868408203 seconds 1: Time to load utils op: 0.0010671615600585938 secondsTime to load utils op: 0.0010063648223876953 seconds 1: 1: Time to load utils op: 0.0010607242584228516 seconds 1: Time to load utils op: 0.001028299331665039 seconds 1: Time to load utils op: 0.001043081283569336 seconds 1: Time to load utils op: 0.001087188720703125 seconds 0: [2023-03-17 10:55:05,468] [INFO] [utils.py:827:see_memory_usage] before initializing group 0 0: [2023-03-17 10:55:05,469] [INFO] [utils.py:828:see_memory_usage] MA 0.28 GB Max_MA 0.28 GB CA 0.31 GB Max_CA 0 GB 0: [2023-03-17 10:55:05,469] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 4: Time to load utils op: 0.0006296634674072266 seconds 4: Time to load utils op: 0.0006105899810791016 seconds 4: Time to load utils op: 0.0008127689361572266 secondsTime to load utils op: 0.0008356571197509766 seconds 4: 4: Time to load utils op: 0.0008745193481445312 seconds 4: Time to load utils op: 0.0010645389556884766 seconds 4: Time to load utils op: 0.0010077953338623047 seconds 4: Time to load utils op: 0.0010797977447509766 seconds 6: Time to load utils op: 0.000988006591796875 seconds 6: Time to load utils op: 0.000885009765625 seconds 6: Time to load utils op: 0.0009171962738037109 seconds 6: Time to load utils op: 0.0011649131774902344 seconds 6: Time to load utils op: 0.0011718273162841797 seconds 6: Time to load utils op: 0.0011255741119384766 seconds 6: 
Time to load utils op: 0.0011513233184814453 seconds 6: Time to load utils op: 0.0012340545654296875 seconds 0: [2023-03-17 10:55:05,589] [INFO] [utils.py:827:see_memory_usage] after initializing group 0 0: [2023-03-17 10:55:05,589] [INFO] [utils.py:828:see_memory_usage] MA 0.62 GB Max_MA 0.62 GB CA 0.82 GB Max_CA 1 GB 0: [2023-03-17 10:55:05,589] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:05,694] [INFO] [utils.py:827:see_memory_usage] before initializing group 1 0: [2023-03-17 10:55:05,695] [INFO] [utils.py:828:see_memory_usage] MA 0.62 GB Max_MA 0.62 GB CA 0.82 GB Max_CA 1 GB 0: [2023-03-17 10:55:05,695] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:05,802] [INFO] [utils.py:827:see_memory_usage] after initializing group 1 0: [2023-03-17 10:55:05,802] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:05,802] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:05,907] [INFO] [utils.py:827:see_memory_usage] before initializing group 2 0: [2023-03-17 10:55:05,908] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:05,908] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:06,015] [INFO] [utils.py:827:see_memory_usage] after initializing group 2 0: [2023-03-17 10:55:06,015] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,015] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:06,120] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer 0: [2023-03-17 10:55:06,120] [INFO] [utils.py:828:see_memory_usage] MA 0.83 GB Max_MA 0.83 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,120] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:06,229] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer 0: [2023-03-17 10:55:06,229] [INFO] [utils.py:828:see_memory_usage] MA 0.85 GB Max_MA 0.85 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,230] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:06,333] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer 0: [2023-03-17 10:55:06,334] [INFO] [utils.py:828:see_memory_usage] MA 0.85 GB Max_MA 0.85 GB CA 1.13 GB Max_CA 1 GB 0: [2023-03-17 10:55:06,334] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.58 GB, percent = 6.3% 0: [2023-03-17 10:55:06,334] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam 0: [2023-03-17 10:55:06,334] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler 0: [2023-03-17 10:55:06,334] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = 0: [2023-03-17 10:55:06,334] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 0: [2023-03-17 10:55:06,334] [INFO] [config.py:1007:print] DeepSpeedEngine configuration: 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] activation_checkpointing_config { 0: "partition_activations": false, 0: "contiguous_memory_optimization": false, 0: "cpu_checkpointing": false, 0: "number_checkpoints": null, 0: 
"synchronize_checkpoint_boundary": false, 0: "profile": false 0: } 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] amp_enabled .................. False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] amp_params ................... False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] autotuning_config ............ { 0: "enabled": false, 0: "start_step": null, 0: "end_step": null, 0: "metric_path": null, 0: "arg_mappings": null, 0: "metric": "throughput", 0: "model_info": null, 0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results", 0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps", 0: "overwrite": true, 0: "fast": true, 0: "start_profile_step": 3, 0: "end_profile_step": 5, 0: "tuner_type": "gridsearch", 0: "tuner_early_stopping": 5, 0: "tuner_num_trials": 50, 0: "model_info_path": null, 0: "mp_size": 1, 0: "max_train_batch_size": null, 0: "min_train_batch_size": 1, 0: "max_train_micro_batch_size_per_gpu": 1.024000e+03, 0: "min_train_micro_batch_size_per_gpu": 1, 0: "num_tuning_micro_batch_sizes": 3 0: } 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] bfloat16_enabled ............. True 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] comms_config ................. 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] communication_data_type ...... None 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_pa 0: rameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] curriculum_enabled ........... False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] curriculum_params ............ False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] dataloader_drop_last ......... False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] disable_allgather ............ 
False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] dump_state ................... False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False 0: [2023-03-17 10:55:06,335] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] elasticity_enabled ........... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] flops_profiler_config ........ { 0: "enabled": false, 0: "profile_step": 1, 0: "module_depth": -1, 0: "top_modules": 1, 0: "detailed": true, 0: "output_file": null 0: } 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] fp16_auto_cast ............... None 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] fp16_enabled ................. False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] global_rank .................. 0 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] load_universal_checkpoint .... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] loss_scale ................... 1.0 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] memory_breakdown ............. False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] monitor_config ............... 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] nebula_config ................ { 0: "enabled": false, 0: "persistent_storage_path": null, 0: "persistent_time_interval": 100, 0: "num_of_version_in_retention": 2, 0: "enable_nebula_load": true, 0: "load_path": null 0: } 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] optimizer_name ............... None 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] optimizer_params ............. None 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0} 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] pld_enabled .................. False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] pld_params ................... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] prescale_gradients ........... 
False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] scheduler_name ............... None 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] scheduler_params ............. None 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] sparse_attention ............. None 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] sparse_gradients_enabled ..... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] steps_per_print .............. 2000 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] train_batch_size ............. 256 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] train_micro_batch_size_per_gpu 4 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] use_node_local_storage ....... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] wall_clock_breakdown ......... False 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] world_size ................... 64 0: [2023-03-17 10:55:06,336] [INFO] [config.py:1011:print] zero_allow_untested_optimizer False 0: [2023-03-17 10:55:06,337] [INFO] [config.py:1011:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False 0: [2023-03-17 10:55:06,337] [INFO] [config.py:1011:print] zero_enabled ................. False 0: [2023-03-17 10:55:06,337] [INFO] [config.py:1011:print] zero_optimization_stage ...... 0 0: [2023-03-17 10:55:06,337] [INFO] [config.py:996:print_user_config] json = { 0: "train_micro_batch_size_per_gpu": 4, 0: "train_batch_size": 256, 0: "gradient_clipping": 1.0, 0: "zero_optimization": { 0: "stage": 0 0: }, 0: "bf16": { 0: "enabled": true 0: }, 0: "steps_per_print": 2.000000e+03, 0: "wall_clock_breakdown": false 0: } 0: Time to load utils op: 0.004677534103393555 seconds 0: [2023-03-17 10:55:06,341] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=4 0: [2023-03-17 10:55:06,393] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=22 [0, 22) STAGE_PARAMS=146525952 (146.526M) TOTAL_PARAMS=146525952 (146.526M) UNIQUE_PARAMS=146525952 (146.526M) 0: [2023-03-17 10:55:06,400] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m174b100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 0: WARNING: could not find the metadata file checkpoints_146m174b100mdedup 7: [2023-03-17 10:55:06,400] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_146m174b100mdedup/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 
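
The JSON echoed by print_user_config above is the entire contents of the DeepSpeed config passed via --deepspeed_config; reconstructed as Python (the on-disk ds_configs/3328806.json may format numbers differently, e.g. steps_per_print as a plain 2000 rather than 2.000000e+03), it is simply:

import json

# User config as echoed by config.py:print_user_config above: bf16 training,
# ZeRO stage 0 (plain data parallelism), micro batch 4, global batch 256.
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "train_batch_size": 256,
    "gradient_clipping": 1.0,
    "zero_optimization": {"stage": 0},
    "bf16": {"enabled": True},
    "steps_per_print": 2000,
    "wall_clock_breakdown": False,
}
print(json.dumps(ds_config, indent=2))
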
0: will not load any checkpoints and will start from random
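The run therefore starts from random weights; no checkpoint exists yet under checkpoints_146m174b100mdedup. For reference, the DeepSpeed user config echoed by print_user_config above can be reproduced as a plain Python dict. This is a sketch reconstructed from the log, not the contents of ds_configs/3328806.json itself, and the assert spells out the batch-size arithmetic DeepSpeed enforces: a micro-batch of 4 on each of the 64 ranks, with a single gradient-accumulation step (micro_batches=1), gives the global batch of 256.

# Sketch of the DeepSpeed user config echoed above (reconstructed from the log,
# not read from ds_configs/3328806.json itself).
ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "train_batch_size": 256,
    "gradient_clipping": 1.0,
    "zero_optimization": {"stage": 0},
    "bf16": {"enabled": True},
    "steps_per_print": 2000,
    "wall_clock_breakdown": False,
}

# DeepSpeed requires:
#   train_batch_size == micro_batch_size * gradient_accumulation_steps * data_parallel_size
world_size = 64        # "world_size ................... 64" above
grad_accum_steps = 1   # engine.py reports micro_batches=1
assert (ds_config["train_micro_batch_size_per_gpu"] * grad_accum_steps * world_size
        == ds_config["train_batch_size"])  # 4 * 1 * 64 == 256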
7: time (ms) | load-checkpoint: 7.40
0: estimated model parameters: 0.146525952
0: estimated model parameters without embeddings: 0.106319616
0: [after model, optimizer, and learning rate scheduler are built] datetime: 2023-03-17 10:55:06
0: > building train, validation, and test datasets ...
0: > datasets target sizes (minimum size):
0: train: 84762549
0: validation: 8704
0: test: 256
0: > building train, validation, and test datasets for GPT ...
0: > building dataset index ...
0: reading sizes...
0: reading pointers...
0: reading document index...
0: creating numpy buffer of mmap...
0: creating memory view of numpy buffer...
0: > finished creating indexed dataset in 0.008076 seconds
0: number of documents: 409500
0: > dataset split:
0: train:
0: document indices in [0, 409500) total of 409500 documents
0: > WARNING: could not find index map files, building the indices on rank 0 ...
0: > last epoch number of samples (28219) is smaller than 95.0% of number of samples per epoch (48281), setting separate_last_epoch to True
0: > elapsed time to build and save doc-idx mapping (seconds): 55.056261
0: using:
0: number of documents: 409500
0: number of epochs: 1756
0: sequence length: 2048
0: total number of samples: 84782612
0: > elapsed time to build and save sample-idx mapping (seconds): 2.407923
0: > building shuffle index with split [0, 84734330) and [84734330, 84782612) ...
0: > elapsed time to build and save shuffle-idx mapping (seconds): 4.671883
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_84762549ns_2048sl_1234s_doc_idx.npy
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_84762549ns_2048sl_1234s_sample_idx.npy
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_dedup_100M_text_document_train_indexmap_84762549ns_2048sl_1234s_shuffle_idx.npy
0: loaded indexed file in 0.094 seconds
0: total number of samples: 84782613
0: total number of epochs: 1756
0: > building dataset index ...
0: reading sizes...
0: reading pointers...
0: reading document index...
0: creating numpy buffer of mmap...
0: creating memory view of numpy buffer...
0: > finished creating indexed dataset in 0.031761 seconds
0: number of documents: 364608
0: > dataset split:
0: validation:
0: document indices in [0, 364608) total of 364608 documents
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_8704ns_2048sl_1234s_doc_idx.npy
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_8704ns_2048sl_1234s_sample_idx.npy
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_8704ns_2048sl_1234s_shuffle_idx.npy
0: loaded indexed file in 0.079 seconds
0: total number of samples: 84978
0: total number of epochs: 1
0: > finished creating GPT datasets ...
0: [after dataloaders are built] datetime: 2023-03-17 10:56:22
0: done with setup ...
0: training ...
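The dataset statistics above fix the length of the run. The standalone sketch below, using only numbers that appear in this log, shows how the 1756 epochs over the 409,500 deduplicated C4 documents, the 331,103 total iterations reported in the training lines that follow, and the roughly 174B training tokens implied by the run name all follow from the 84,762,549-sample target with a global batch of 256 sequences of 2,048 tokens. The separate_last_epoch flag is set because the final pass would contribute only 28,219 samples, under 95% of the 48,281 samples of a full epoch, so it is shuffled separately (hence the two shuffle ranges above).

# Standalone sanity check of the dataset figures above; every input is taken from this log.
seq_length = 2048
global_batch_size = 256
target_train_samples = 84_762_549   # "train: 84762549"
built_samples = 84_782_612          # "total number of samples: 84782612"
num_epochs = 1756                   # "number of epochs: 1756"

samples_per_epoch = built_samples // num_epochs            # ~48_281, as reported
train_iters = target_train_samples // global_batch_size    # 331_103 optimizer steps
tokens_per_iter = global_batch_size * seq_length           # 524_288 tokens per step
total_tokens = target_train_samples * seq_length           # ~173.6B tokens ("174b" in the run name)

assert samples_per_epoch == 48_281
assert train_iters == 331_103
print(f"{tokens_per_iter=:,}  total_tokens~{total_tokens/1e9:.1f}B")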
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings: 7: time (ms) | model-and-optimizer-setup: 16972.48 | train/valid/test-data-iterators-setup: 76009.68 0: [000-000] 0.1465B / 0.1063B 0: [before the start of training step] datetime: 2023-03-17 10:56:23 0: [2023-03-17 10:56:23,512] [INFO] [checkpointing.py:553:forward] Activation Checkpointing Information 0: [2023-03-17 10:56:23,512] [INFO] [checkpointing.py:554:forward] ----Partition Activations False, CPU CHECKPOINTING False 0: [2023-03-17 10:56:23,512] [INFO] [checkpointing.py:557:forward] ----contiguous Memory Checkpointing False with None total layers 0: [2023-03-17 10:56:23,512] [INFO] [checkpointing.py:560:forward] ----Synchronization False 0: [2023-03-17 10:56:23,512] [INFO] [checkpointing.py:561:forward] ----Profiling time in checkpointing False 0: [Rank 0] (after 100 iterations) memory (MB) | allocated: 2730.60986328125 | max allocated: 5305.046875 | reserved: 6818.0 | max reserved: 6818.0 7: iteration 100/ 331103 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (s): 0.48 | learning rate: 6.040E-06 | global batch size: 256 | lm loss: 9.852918E+00 | grad norm: 1.684 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 530.656 | TFLOPs: 24.77 | 7: iteration 200/ 331103 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 0.40 | learning rate: 1.208E-05 | global batch size: 256 | lm loss: 8.499832E+00 | grad norm: 1.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 633.349 | TFLOPs: 29.56 | 7: iteration 300/ 331103 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 0.38 | learning rate: 1.812E-05 | global batch size: 256 | lm loss: 7.493411E+00 | grad norm: 0.716 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.732 | TFLOPs: 31.77 | 7: iteration 400/ 331103 | consumed samples: 102400 | consumed tokens: 209715200 | elapsed time per iteration (s): 0.37 | learning rate: 2.416E-05 | global batch size: 256 | lm loss: 6.867657E+00 | grad norm: 0.459 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.159 | TFLOPs: 31.98 | 7: iteration 500/ 331103 | consumed samples: 128000 | consumed tokens: 262144000 | elapsed time per iteration (s): 0.37 | learning rate: 3.020E-05 | global batch size: 256 | lm loss: 6.556413E+00 | grad norm: 0.786 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.049 | TFLOPs: 31.93 | 7: iteration 600/ 331103 | consumed samples: 153600 | consumed tokens: 314572800 | elapsed time per iteration (s): 0.37 | learning rate: 3.624E-05 | global batch size: 256 | lm loss: 6.374343E+00 | grad norm: 0.566 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.513 | TFLOPs: 31.95 | 7: iteration 700/ 331103 | consumed samples: 179200 | consumed tokens: 367001600 | elapsed time per iteration (s): 0.37 | learning rate: 4.228E-05 | global batch size: 256 | lm loss: 6.241594E+00 | grad norm: 0.641 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.298 | TFLOPs: 31.89 | 7: iteration 800/ 331103 | consumed samples: 204800 | consumed tokens: 419430400 | elapsed time per iteration (s): 0.38 | learning rate: 4.832E-05 | global batch 
size: 256 | lm loss: 6.127764E+00 | grad norm: 1.011 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.493 | TFLOPs: 31.86 | 7: iteration 900/ 331103 | consumed samples: 230400 | consumed tokens: 471859200 | elapsed time per iteration (s): 0.37 | learning rate: 5.436E-05 | global batch size: 256 | lm loss: 6.015020E+00 | grad norm: 1.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.702 | TFLOPs: 31.96 | 7: iteration 1000/ 331103 | consumed samples: 256000 | consumed tokens: 524288000 | elapsed time per iteration (s): 0.38 | learning rate: 6.040E-05 | global batch size: 256 | lm loss: 5.899557E+00 | grad norm: 0.957 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 676.193 | TFLOPs: 31.56 | 7: iteration 1100/ 331103 | consumed samples: 281600 | consumed tokens: 576716800 | elapsed time per iteration (s): 0.38 | learning rate: 6.644E-05 | global batch size: 256 | lm loss: 5.787103E+00 | grad norm: 1.496 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.968 | TFLOPs: 31.74 | 7: iteration 1200/ 331103 | consumed samples: 307200 | consumed tokens: 629145600 | elapsed time per iteration (s): 0.38 | learning rate: 7.248E-05 | global batch size: 256 | lm loss: 5.685525E+00 | grad norm: 1.062 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.566 | TFLOPs: 31.86 | 7: iteration 1300/ 331103 | consumed samples: 332800 | consumed tokens: 681574400 | elapsed time per iteration (s): 0.38 | learning rate: 7.853E-05 | global batch size: 256 | lm loss: 5.576919E+00 | grad norm: 1.032 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.934 | TFLOPs: 31.69 | 7: iteration 1400/ 331103 | consumed samples: 358400 | consumed tokens: 734003200 | elapsed time per iteration (s): 0.39 | learning rate: 8.457E-05 | global batch size: 256 | lm loss: 5.476093E+00 | grad norm: 1.106 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 657.976 | TFLOPs: 30.71 | 7: iteration 1500/ 331103 | consumed samples: 384000 | consumed tokens: 786432000 | elapsed time per iteration (s): 0.38 | learning rate: 9.061E-05 | global batch size: 256 | lm loss: 5.378074E+00 | grad norm: 1.190 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 665.280 | TFLOPs: 31.05 | 7: iteration 1600/ 331103 | consumed samples: 409600 | consumed tokens: 838860800 | elapsed time per iteration (s): 0.38 | learning rate: 9.665E-05 | global batch size: 256 | lm loss: 5.284706E+00 | grad norm: 0.982 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.561 | TFLOPs: 31.72 | 7: iteration 1700/ 331103 | consumed samples: 435200 | consumed tokens: 891289600 | elapsed time per iteration (s): 0.37 | learning rate: 1.027E-04 | global batch size: 256 | lm loss: 5.199332E+00 | grad norm: 0.912 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.837 | TFLOPs: 31.92 | 7: iteration 1800/ 331103 | consumed samples: 460800 | consumed tokens: 943718400 | elapsed time per iteration (s): 0.37 | learning rate: 1.087E-04 | global batch size: 256 | lm loss: 5.119381E+00 | grad norm: 0.735 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 684.723 | TFLOPs: 31.96 | 7: iteration 1900/ 331103 | consumed samples: 486400 | consumed tokens: 996147200 | elapsed time per iteration (s): 0.38 | learning rate: 1.148E-04 | global batch size: 256 | lm loss: 5.041324E+00 | grad norm: 0.559 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.969 | TFLOPs: 31.74 | 0: [2023-03-17 11:09:09,476] [INFO] [logging.py:68:log_dist] [Rank 0] step=2000, skipped=0, lr=[0.00012080814039227253, 0.00012080814039227253, 0.00012080814039227253], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 2000/ 331103 | consumed samples: 512000 | consumed tokens: 1048576000 | elapsed time per iteration (s): 0.37 | learning rate: 1.208E-04 | global batch size: 256 | lm loss: 4.966404E+00 | grad norm: 0.790 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.194 | TFLOPs: 31.94 | 0: steps: 2000 loss: 4.9225 iter time (s): 0.381 samples/sec: 671.561 7: iteration 2100/ 331103 | consumed samples: 537600 | consumed tokens: 1101004800 | elapsed time per iteration (s): 0.37 | learning rate: 1.268E-04 | global batch size: 256 | lm loss: 4.895007E+00 | grad norm: 0.863 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.264 | TFLOPs: 31.99 | 7: iteration 2200/ 331103 | consumed samples: 563200 | consumed tokens: 1153433600 | elapsed time per iteration (s): 0.37 | learning rate: 1.329E-04 | global batch size: 256 | lm loss: 4.829178E+00 | grad norm: 0.849 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.360 | TFLOPs: 31.90 | 7: iteration 2300/ 331103 | consumed samples: 588800 | consumed tokens: 1205862400 | elapsed time per iteration (s): 0.38 | learning rate: 1.389E-04 | global batch size: 256 | lm loss: 4.772610E+00 | grad norm: 0.743 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.963 | TFLOPs: 31.83 | 7: iteration 2400/ 331103 | consumed samples: 614400 | consumed tokens: 1258291200 | elapsed time per iteration (s): 0.38 | learning rate: 1.450E-04 | global batch size: 256 | lm loss: 4.719912E+00 | grad norm: 0.687 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.610 | TFLOPs: 31.82 | 7: iteration 2500/ 331103 | consumed samples: 640000 | consumed tokens: 1310720000 | elapsed time per iteration (s): 0.38 | learning rate: 1.510E-04 | global batch size: 256 | lm loss: 4.679331E+00 | grad norm: 0.567 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.487 | TFLOPs: 31.62 | 7: iteration 2600/ 331103 | consumed samples: 665600 | consumed tokens: 1363148800 | elapsed time per iteration (s): 0.37 | learning rate: 1.571E-04 | global batch size: 256 | lm loss: 4.634508E+00 | grad norm: 0.538 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.804 | TFLOPs: 31.87 | 7: iteration 2700/ 331103 | consumed samples: 691200 | consumed tokens: 1415577600 | elapsed time per iteration (s): 0.38 | learning rate: 1.631E-04 | global batch size: 256 | lm loss: 4.595750E+00 | grad norm: 0.839 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.658 | TFLOPs: 31.86 | 7: iteration 2800/ 331103 | consumed samples: 716800 | consumed tokens: 
1468006400 | elapsed time per iteration (s): 0.38 | learning rate: 1.691E-04 | global batch size: 256 | lm loss: 4.560840E+00 | grad norm: 0.538 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 676.072 | TFLOPs: 31.56 | 7: iteration 2900/ 331103 | consumed samples: 742400 | consumed tokens: 1520435200 | elapsed time per iteration (s): 0.38 | learning rate: 1.752E-04 | global batch size: 256 | lm loss: 4.529776E+00 | grad norm: 0.757 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.968 | TFLOPs: 31.79 | 7: iteration 3000/ 331103 | consumed samples: 768000 | consumed tokens: 1572864000 | elapsed time per iteration (s): 0.38 | learning rate: 1.812E-04 | global batch size: 256 | lm loss: 4.494099E+00 | grad norm: 0.589 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.151 | TFLOPs: 31.61 | 7: iteration 3100/ 331103 | consumed samples: 793600 | consumed tokens: 1625292800 | elapsed time per iteration (s): 0.38 | learning rate: 1.873E-04 | global batch size: 256 | lm loss: 4.465293E+00 | grad norm: 0.461 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.464 | TFLOPs: 31.81 | 7: iteration 3200/ 331103 | consumed samples: 819200 | consumed tokens: 1677721600 | elapsed time per iteration (s): 0.37 | learning rate: 1.933E-04 | global batch size: 256 | lm loss: 4.438477E+00 | grad norm: 0.525 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.409 | TFLOPs: 31.95 | 7: iteration 3300/ 331103 | consumed samples: 844800 | consumed tokens: 1730150400 | elapsed time per iteration (s): 0.38 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 4.415167E+00 | grad norm: 0.478 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.881 | TFLOPs: 31.73 | 7: iteration 3400/ 331103 | consumed samples: 870400 | consumed tokens: 1782579200 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.388732E+00 | grad norm: 0.522 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.723 | TFLOPs: 31.87 | 7: iteration 3500/ 331103 | consumed samples: 896000 | consumed tokens: 1835008000 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.358397E+00 | grad norm: 0.540 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.886 | TFLOPs: 31.83 | 7: iteration 3600/ 331103 | consumed samples: 921600 | consumed tokens: 1887436800 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.340852E+00 | grad norm: 0.525 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 670.279 | TFLOPs: 31.29 | 7: iteration 3700/ 331103 | consumed samples: 947200 | consumed tokens: 1939865600 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.318645E+00 | grad norm: 0.441 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.243 | TFLOPs: 31.61 | 7: iteration 3800/ 331103 | consumed samples: 972800 | consumed tokens: 1992294400 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch 
size: 256 | lm loss: 4.296981E+00 | grad norm: 0.518 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.823 | TFLOPs: 31.73 | 7: iteration 3900/ 331103 | consumed samples: 998400 | consumed tokens: 2044723200 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.278233E+00 | grad norm: 0.428 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.198 | TFLOPs: 31.89 | 0: [2023-03-17 11:21:41,719] [INFO] [logging.py:68:log_dist] [Rank 0] step=4000, skipped=0, lr=[0.00019999803796692803, 0.00019999803796692803, 0.00019999803796692803], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 4000/ 331103 | consumed samples: 1024000 | consumed tokens: 2097152000 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.257326E+00 | grad norm: 0.396 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.559 | TFLOPs: 31.86 | 0: steps: 4000 loss: 4.2425 iter time (s): 0.374 samples/sec: 684.477 7: iteration 4100/ 331103 | consumed samples: 1049600 | consumed tokens: 2149580800 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.241578E+00 | grad norm: 0.395 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 686.705 | TFLOPs: 32.05 | 7: iteration 4200/ 331103 | consumed samples: 1075200 | consumed tokens: 2202009600 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.225186E+00 | grad norm: 0.341 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.777 | TFLOPs: 32.01 | 7: iteration 4300/ 331103 | consumed samples: 1100800 | consumed tokens: 2254438400 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.210670E+00 | grad norm: 0.500 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.733 | TFLOPs: 31.91 | 7: iteration 4400/ 331103 | consumed samples: 1126400 | consumed tokens: 2306867200 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.194637E+00 | grad norm: 0.350 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.527 | TFLOPs: 31.86 | 7: iteration 4500/ 331103 | consumed samples: 1152000 | consumed tokens: 2359296000 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.180992E+00 | grad norm: 0.339 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.888 | TFLOPs: 32.01 | 7: iteration 4600/ 331103 | consumed samples: 1177600 | consumed tokens: 2411724800 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.166902E+00 | grad norm: 0.403 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.132 | TFLOPs: 31.98 | 7: iteration 4700/ 331103 | consumed samples: 1203200 | consumed tokens: 2464153600 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.150127E+00 | grad norm: 0.398 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples 
per second: 682.928 | TFLOPs: 31.88 | 7: iteration 4800/ 331103 | consumed samples: 1228800 | consumed tokens: 2516582400 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.202857E+00 | grad norm: 0.303 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.726 | TFLOPs: 31.73 | 7: iteration 4900/ 331103 | consumed samples: 1254400 | consumed tokens: 2569011200 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.133224E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.560 | TFLOPs: 31.77 | 7: iteration 5000/ 331103 | consumed samples: 1280000 | consumed tokens: 2621440000 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.117512E+00 | grad norm: 0.333 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.029 | TFLOPs: 31.88 | 7: iteration 5100/ 331103 | consumed samples: 1305600 | consumed tokens: 2673868800 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.104160E+00 | grad norm: 0.344 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.965 | TFLOPs: 31.83 | 7: iteration 5200/ 331103 | consumed samples: 1331200 | consumed tokens: 2726297600 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.091399E+00 | grad norm: 0.386 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.020 | TFLOPs: 31.88 | 7: iteration 5300/ 331103 | consumed samples: 1356800 | consumed tokens: 2778726400 | elapsed time per iteration (s): 0.39 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.083453E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 662.554 | TFLOPs: 30.93 | 7: iteration 5400/ 331103 | consumed samples: 1382400 | consumed tokens: 2831155200 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.068403E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.219 | TFLOPs: 31.70 | 7: iteration 5500/ 331103 | consumed samples: 1408000 | consumed tokens: 2883584000 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.061406E+00 | grad norm: 0.353 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.036 | TFLOPs: 31.97 | 7: iteration 5600/ 331103 | consumed samples: 1433600 | consumed tokens: 2936012800 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.050631E+00 | grad norm: 0.416 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.561 | TFLOPs: 31.81 | 7: iteration 5700/ 331103 | consumed samples: 1459200 | consumed tokens: 2988441600 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.040457E+00 | grad norm: 0.335 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 676.291 | TFLOPs: 31.57 | 7: iteration 5800/ 331103 | consumed samples: 
1484800 | consumed tokens: 3040870400 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.029711E+00 | grad norm: 0.361 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.019 | TFLOPs: 31.60 | 7: iteration 5900/ 331103 | consumed samples: 1510400 | consumed tokens: 3093299200 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.023485E+00 | grad norm: 0.332 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.854 | TFLOPs: 32.01 | 0: [2023-03-17 11:34:12,934] [INFO] [logging.py:68:log_dist] [Rank 0] step=6000, skipped=0, lr=[0.0001999701145368867, 0.0001999701145368867, 0.0001999701145368867], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 6000/ 331103 | consumed samples: 1536000 | consumed tokens: 3145728000 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.012180E+00 | grad norm: 0.385 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.724 | TFLOPs: 31.91 | 0: steps: 6000 loss: 4.0358 iter time (s): 0.373 samples/sec: 685.476 7: iteration 6100/ 331103 | consumed samples: 1561600 | consumed tokens: 3198156800 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 4.005279E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.620 | TFLOPs: 31.72 | 7: iteration 6200/ 331103 | consumed samples: 1587200 | consumed tokens: 3250585600 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 3.996388E+00 | grad norm: 0.347 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.269 | TFLOPs: 31.85 | 7: iteration 6300/ 331103 | consumed samples: 1612800 | consumed tokens: 3303014400 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 3.985336E+00 | grad norm: 0.382 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.221 | TFLOPs: 31.89 | 7: iteration 6400/ 331103 | consumed samples: 1638400 | consumed tokens: 3355443200 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 3.980588E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.396 | TFLOPs: 31.90 | 7: iteration 6500/ 331103 | consumed samples: 1664000 | consumed tokens: 3407872000 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 3.970438E+00 | grad norm: 0.379 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.708 | TFLOPs: 31.82 | 7: iteration 6600/ 331103 | consumed samples: 1689600 | consumed tokens: 3460300800 | elapsed time per iteration (s): 0.38 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 3.962166E+00 | grad norm: 0.341 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.617 | TFLOPs: 31.82 | 7: iteration 6700/ 331103 | consumed samples: 1715200 | consumed tokens: 3512729600 | elapsed time per iteration (s): 0.37 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 
3.957177E+00 | grad norm: 0.366 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.688 | TFLOPs: 31.91 | 7: iteration 6800/ 331103 | consumed samples: 1740800 | consumed tokens: 3565158400 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.946317E+00 | grad norm: 0.362 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.940 | TFLOPs: 31.92 | 7: iteration 6900/ 331103 | consumed samples: 1766400 | consumed tokens: 3617587200 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.936903E+00 | grad norm: 0.505 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.431 | TFLOPs: 31.85 | 7: iteration 7000/ 331103 | consumed samples: 1792000 | consumed tokens: 3670016000 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.934796E+00 | grad norm: 0.318 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.285 | TFLOPs: 31.80 | 7: iteration 7100/ 331103 | consumed samples: 1817600 | consumed tokens: 3722444800 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.927296E+00 | grad norm: 0.359 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.610 | TFLOPs: 31.77 | 7: iteration 7200/ 331103 | consumed samples: 1843200 | consumed tokens: 3774873600 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.918745E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.527 | TFLOPs: 31.90 | 7: iteration 7300/ 331103 | consumed samples: 1868800 | consumed tokens: 3827302400 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.911469E+00 | grad norm: 0.349 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.467 | TFLOPs: 31.81 | 7: iteration 7400/ 331103 | consumed samples: 1894400 | consumed tokens: 3879731200 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.906372E+00 | grad norm: 0.337 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.650 | TFLOPs: 31.86 | 7: iteration 7500/ 331103 | consumed samples: 1920000 | consumed tokens: 3932160000 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.902337E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.321 | TFLOPs: 31.89 | 7: iteration 7600/ 331103 | consumed samples: 1945600 | consumed tokens: 3984588800 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.895023E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.076 | TFLOPs: 31.88 | 7: iteration 7700/ 331103 | consumed samples: 1971200 | consumed tokens: 4037017600 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.889802E+00 | grad norm: 0.415 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 672.703 | TFLOPs: 31.40 | 7: iteration 7800/ 331103 | consumed samples: 1996800 | consumed tokens: 4089446400 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.883795E+00 | grad norm: 0.318 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.204 | TFLOPs: 31.94 | 7: iteration 7900/ 331103 | consumed samples: 2022400 | consumed tokens: 4141875200 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.876316E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.223 | TFLOPs: 31.89 | 0: [2023-03-17 11:46:43,556] [INFO] [logging.py:68:log_dist] [Rank 0] step=8000, skipped=0, lr=[0.000199909135416451, 0.000199909135416451, 0.000199909135416451], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 8000/ 331103 | consumed samples: 2048000 | consumed tokens: 4194304000 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.869672E+00 | grad norm: 0.302 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.097 | TFLOPs: 31.93 | 0: steps: 8000 loss: 3.8696 iter time (s): 0.373 samples/sec: 685.654 7: iteration 8100/ 331103 | consumed samples: 2073600 | consumed tokens: 4246732800 | elapsed time per iteration (s): 0.40 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.865273E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 632.358 | TFLOPs: 29.52 | 7: iteration 8200/ 331103 | consumed samples: 2099200 | consumed tokens: 4299161600 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.861646E+00 | grad norm: 0.324 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 688.156 | TFLOPs: 32.12 | 7: iteration 8300/ 331103 | consumed samples: 2124800 | consumed tokens: 4351590400 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.854921E+00 | grad norm: 0.323 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.796 | TFLOPs: 32.01 | 7: iteration 8400/ 331103 | consumed samples: 2150400 | consumed tokens: 4404019200 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.850058E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.059 | TFLOPs: 31.93 | 7: iteration 8500/ 331103 | consumed samples: 2176000 | consumed tokens: 4456448000 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.840331E+00 | grad norm: 0.327 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.481 | TFLOPs: 31.90 | 7: iteration 8600/ 331103 | consumed samples: 2201600 | consumed tokens: 4508876800 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.837950E+00 | grad norm: 0.304 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.841 | TFLOPs: 31.97 | 7: iteration 8700/ 331103 | consumed samples: 2227200 | consumed tokens: 
4561305600 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.834997E+00 | grad norm: 0.254 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.127 | TFLOPs: 31.65 | 7: iteration 8800/ 331103 | consumed samples: 2252800 | consumed tokens: 4613734400 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.829685E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.431 | TFLOPs: 31.76 | 7: iteration 8900/ 331103 | consumed samples: 2278400 | consumed tokens: 4666163200 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.824074E+00 | grad norm: 0.323 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.565 | TFLOPs: 31.91 | 7: iteration 9000/ 331103 | consumed samples: 2304000 | consumed tokens: 4718592000 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.819200E+00 | grad norm: 0.264 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.562 | TFLOPs: 31.72 | 7: iteration 9100/ 331103 | consumed samples: 2329600 | consumed tokens: 4771020800 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.813169E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.383 | TFLOPs: 31.90 | 7: iteration 9200/ 331103 | consumed samples: 2355200 | consumed tokens: 4823449600 | elapsed time per iteration (s): 0.38 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.807544E+00 | grad norm: 0.315 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.231 | TFLOPs: 31.75 | 7: iteration 9300/ 331103 | consumed samples: 2380800 | consumed tokens: 4875878400 | elapsed time per iteration (s): 0.37 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 3.802642E+00 | grad norm: 0.320 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.121 | TFLOPs: 31.89 | 7: iteration 9400/ 331103 | consumed samples: 2406400 | consumed tokens: 4928307200 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.797152E+00 | grad norm: 0.327 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.149 | TFLOPs: 31.75 | 7: iteration 9500/ 331103 | consumed samples: 2432000 | consumed tokens: 4980736000 | elapsed time per iteration (s): 0.37 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.795085E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.122 | TFLOPs: 31.89 | 7: iteration 9600/ 331103 | consumed samples: 2457600 | consumed tokens: 5033164800 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.789136E+00 | grad norm: 0.304 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.671 | TFLOPs: 31.72 | 7: iteration 9700/ 331103 | consumed samples: 2483200 | consumed tokens: 5085593600 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | 
global batch size: 256 | lm loss: 3.787000E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.510 | TFLOPs: 31.86 | 7: iteration 9800/ 331103 | consumed samples: 2508800 | consumed tokens: 5138022400 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.782720E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 672.207 | TFLOPs: 31.38 | 7: iteration 9900/ 331103 | consumed samples: 2534400 | consumed tokens: 5190451200 | elapsed time per iteration (s): 0.37 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.777397E+00 | grad norm: 0.362 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.805 | TFLOPs: 31.96 | 0: [2023-03-17 11:59:17,333] [INFO] [logging.py:68:log_dist] [Rank 0] step=10000, skipped=0, lr=[0.00019981512300980405, 0.00019981512300980405, 0.00019981512300980405], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 10000/ 331103 | consumed samples: 2560000 | consumed tokens: 5242880000 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.776499E+00 | grad norm: 0.326 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.520 | TFLOPs: 31.72 | 0: steps: 10000 loss: 3.7821 iter time (s): 0.375 samples/sec: 682.903 7: ------------------------------------------------------------------------------------------------ 7: validation loss at iteration 10000 | lm loss value: 3.924540E+00 | lm loss PPL: 5.062977E+01 | 7: ------------------------------------------------------------------------------------------------ 0: saving checkpoint at iteration 10000 to checkpoints_146m174b100mdedup 0: [2023-03-17 11:59:17,496] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step10000 is begin to save! 0: [2023-03-17 11:59:18,134] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step10000/layer_01-model_00-model_states.pt... 0: [2023-03-17 11:59:18,234] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step10000/layer_01-model_00-model_states.pt. 0: [2023-03-17 11:59:18,234] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step10000/layer_03-model_00-model_states.pt... 0: [2023-03-17 11:59:18,251] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step10000/layer_03-model_00-model_states.pt. 0: [2023-03-17 11:59:18,252] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step10000/layer_04-model_00-model_states.pt... 0: [2023-03-17 11:59:18,268] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step10000/layer_04-model_00-model_states.pt. 0: [2023-03-17 11:59:18,268] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step10000/layer_05-model_00-model_states.pt... 0: [2023-03-17 11:59:18,285] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step10000/layer_05-model_00-model_states.pt. 
0: saving checkpoint at iteration 10000 to checkpoints_146m174b100mdedup
0: [2023-03-17 11:59:17,496] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step10000 is begin to save!
[... 2023-03-17 11:59:18,134 - 11:59:18,575: repeated per-file [torch_checkpoint_engine.py] Saving/Saved messages for the layer state files (layer_01, layer_03-layer_17, layer_19 -model_00-model_states.pt) and mp_rank_00_model_states.pt, then for the 64 optimizer-state partitions bf16_zero_pp_rank_0...63_mp_rank_00_optim_states.pt, each acknowledged by [engine.py:3213:_save_zero_checkpoint] "bf16_zero checkpoint saved" and "[Torch] Checkpoint global_step10000 is ready now!" on ranks 0-7 ...]
0: successfully saved checkpoint at iteration 10000 to checkpoints_146m174b100mdedup
7: time (ms) | save-checkpoint: 1126.32
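Per the filenames logged above, a global_step10000 checkpoint directory holds one layer_XX-model_00-model_states.pt file per pipeline layer slot, a small mp_rank_00_model_states.pt, and one bf16_zero_pp_rank_N_mp_rank_00_optim_states.pt optimizer-state file per data-parallel rank (ranks 0-63 here, written from the 8 nodes prefixed 0:-7: above). A hypothetical tally of such a directory (the path and script are illustrative assumptions, not output of this run):

from pathlib import Path

# Hypothetical location; point this at the checkpoint directory named in the log.
ckpt = Path("checkpoints_146m174b100mdedup/global_step10000")

layer_states = sorted(ckpt.glob("layer_*-model_00-model_states.pt"))
optim_shards = sorted(ckpt.glob("bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt"))

print(len(layer_states), "layer state files")     # layer_01, layer_03..17, layer_19 in the log
print(len(optim_shards), "optimizer state shards")  # expected 64, one per data-parallel rank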
7: iteration 10100/ 331103 | consumed samples: 2585600 | consumed tokens: 5295308800 | elapsed time per iteration (s): 0.39 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.771752E+00 | grad norm: 0.344 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 654.991 | TFLOPs: 30.57 |
7: iteration 10200/ 331103 | consumed samples: 2611200 | consumed tokens: 5347737600 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.768426E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.959 | TFLOPs: 31.46 |
7: iteration 10300/ 331103 | consumed samples: 2636800 | consumed tokens: 5400166400 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.763525E+00 | grad norm: 0.371 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 668.444 | TFLOPs: 31.20 |
7: iteration 10400/ 331103 | consumed samples: 2662400 | consumed tokens: 5452595200 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.760586E+00 | grad norm: 0.338 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.677 | TFLOPs: 31.44 |
7: iteration 10500/ 331103 | consumed samples: 2688000 | consumed tokens: 5505024000 | elapsed time per iteration (s): 0.40 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.754074E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 637.945 | TFLOPs: 29.78 |
7: iteration 10600/ 331103 | consumed samples: 2713600 | consumed tokens: 5557452800 | elapsed time per iteration (s): 0.37 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.753015E+00 | grad norm: 0.322 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.473 | TFLOPs: 31.95 |
7: iteration 10700/ 331103 | consumed samples: 2739200 | consumed tokens: 5609881600 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.746226E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 673.814 | TFLOPs: 31.45 |
7: iteration 10800/ 331103 | consumed samples: 2764800 | consumed tokens: 5662310400 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.743896E+00 | grad norm: 0.327 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.279 | TFLOPs: 31.66 |
7: iteration 10900/ 331103 | consumed samples: 2790400 | consumed tokens: 5714739200 | elapsed time per iteration (s): 0.37 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.741812E+00 | grad norm: 0.311 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.052 | TFLOPs: 31.93 |
7: iteration 11000/ 331103 | consumed samples: 2816000 | consumed tokens: 5767168000 | elapsed time per iteration (s): 0.38 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 3.737476E+00 | grad norm: 0.264 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.893 | TFLOPs: 31.78 |
7: iteration 11100/ 331103 | consumed samples: 2841600 | consumed tokens: 5819596800 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.734030E+00 | grad norm: 0.339 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.065 | TFLOPs: 31.79 |
7: iteration 11200/ 331103 | consumed samples: 2867200 | consumed tokens: 5872025600 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.731331E+00 | grad norm: 0.325 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.062 | TFLOPs: 31.79 |
7: iteration 11300/ 331103 | consumed samples: 2892800 | consumed tokens: 5924454400 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.728171E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.142 | TFLOPs: 31.65 |
7: iteration 11400/ 331103 | consumed samples: 2918400 | consumed tokens: 5976883200 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.723509E+00 | grad norm: 0.332 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.973 | TFLOPs: 31.88 |
7: iteration 11500/ 331103 | consumed samples: 2944000 | consumed tokens: 6029312000 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.720793E+00 | grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.130 | TFLOPs: 31.93 |
7: iteration 11600/ 331103 | consumed samples: 2969600 | consumed tokens: 6081740800 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.717907E+00 | grad norm: 0.304 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.363 | TFLOPs: 31.94 |
7: iteration 11700/ 331103 | consumed samples: 2995200 | consumed tokens: 6134169600 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.713470E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.969 | TFLOPs: 31.93 |
7: iteration 11800/ 331103 | consumed samples: 3020800 | consumed tokens: 6186598400 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.710511E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.421 | TFLOPs: 31.90 |
7: iteration 11900/ 331103 | consumed samples: 3046400 | consumed tokens: 6239027200 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.709373E+00 | grad norm: 0.335 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.088 | TFLOPs: 31.79 |
0: [2023-03-17 12:11:54,190] [INFO] [logging.py:68:log_dist] [Rank 0] step=12000, skipped=0, lr=[0.00019968811185780457, 0.00019968811185780457, 0.00019968811185780457], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
7: iteration 12000/ 331103 | consumed samples: 3072000 | consumed tokens: 6291456000 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.707157E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.820 | TFLOPs: 31.87 |
0: steps: 12000 loss: 3.6957 iter time (s): 0.376 samples/sec: 681.351
7: iteration 12100/ 331103 | consumed samples: 3097600 | consumed tokens: 6343884800 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.702428E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.190 | TFLOPs: 31.66 |
7: iteration 12200/ 331103 | consumed samples: 3123200 | consumed tokens: 6396313600 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.700295E+00 | grad norm: 0.333 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.165 | TFLOPs: 31.84 |
7: iteration 12300/ 331103 | consumed samples: 3148800 | consumed tokens: 6448742400 | elapsed time per iteration (s): 0.38 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.698892E+00 | grad norm: 0.322 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.761 | TFLOPs: 31.78 |
7: iteration 12400/ 331103 | consumed samples: 3174400 | consumed tokens: 6501171200 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.693908E+00 | grad norm: 0.349 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.061 | TFLOPs: 31.88 |
7: iteration 12500/ 331103 | consumed samples: 3200000 | consumed tokens: 6553600000 | elapsed time per iteration (s): 0.37 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 3.695586E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.202 | TFLOPs: 31.94 |
7: iteration 12600/ 331103 | consumed samples: 3225600 | consumed tokens: 6606028800 | elapsed time per iteration (s): 0.37 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.686829E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.073 | TFLOPs: 31.98 |
7: iteration 12700/ 331103 | consumed samples: 3251200 | consumed tokens: 6658457600 | elapsed time per iteration (s): 0.37 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.689872E+00 | grad norm: 0.316 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.933 | TFLOPs: 31.92 |
7: iteration 12800/ 331103 | consumed samples: 3276800 | consumed tokens: 6710886400 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.685630E+00 | grad norm: 0.324 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.771 | TFLOPs: 31.82 |
7: iteration 12900/ 331103 | consumed samples: 3302400 | consumed tokens: 6763315200 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.682108E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.018 | TFLOPs: 31.79 |
7: iteration 13000/ 331103 | consumed samples: 3328000 | consumed tokens: 6815744000 | elapsed time per iteration (s): 0.37 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.677119E+00 | grad norm: 0.300 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.520 | TFLOPs: 31.90 |
7: iteration 13100/ 331103 | consumed samples: 3353600 | consumed tokens: 6868172800 | elapsed time per iteration (s): 0.37 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.673554E+00 | grad norm: 0.311 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.124 | TFLOPs: 31.93 |
7: iteration 13200/ 331103 | consumed samples: 3379200 | consumed tokens: 6920601600 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.673994E+00 | grad norm: 0.319 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.488 | TFLOPs: 31.86 |
7: iteration 13300/ 331103 | consumed samples: 3404800 | consumed tokens: 6973030400 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.670280E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.140 | TFLOPs: 31.84 |
7: iteration 13400/ 331103 | consumed samples: 3430400 | consumed tokens: 7025459200 | elapsed time per iteration (s): 0.37 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.667302E+00 | grad norm: 0.344 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.533 | TFLOPs: 31.90 |
7: iteration 13500/ 331103 | consumed samples: 3456000 | consumed tokens: 7077888000 | elapsed time per iteration (s): 0.37 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.667403E+00 | grad norm: 0.317 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.384 | TFLOPs: 31.90 |
7: iteration 13600/ 331103 | consumed samples: 3481600 | consumed tokens: 7130316800 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.661253E+00 | grad norm: 0.333 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.300 | TFLOPs: 31.66 |
7: iteration 13700/ 331103 | consumed samples: 3507200 | consumed tokens: 7182745600 | elapsed time per iteration (s): 0.38 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 3.661651E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.607 | TFLOPs: 31.86 |
7: iteration 13800/ 331103 | consumed samples: 3532800 | consumed tokens: 7235174400 | elapsed time per iteration (s): 0.37 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.660845E+00 | grad norm: 0.326 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.462 | TFLOPs: 31.90 |
7: iteration 13900/ 331103 | consumed samples: 3558400 | consumed tokens: 7287603200 | elapsed time per iteration (s): 0.38 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.656191E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.804 | TFLOPs: 31.64 |
0: [2023-03-17 12:24:24,423] [INFO] [logging.py:68:log_dist] [Rank 0] step=14000, skipped=0, lr=[0.00019952814862529602, 0.00019952814862529602, 0.00019952814862529602], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
7: iteration 14000/ 331103 | consumed samples: 3584000 | consumed tokens: 7340032000 | elapsed time per iteration (s): 0.37 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.653625E+00 | grad norm: 0.301 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 687.690 | TFLOPs: 32.10 |
0: steps: 14000 loss: 3.6451 iter time (s): 0.373 samples/sec: 686.386
7: iteration 14100/ 331103 | consumed samples: 3609600 | consumed tokens: 7392460800 | elapsed time per iteration (s): 0.37 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.649132E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.561 | TFLOPs: 31.95 |
7: iteration 14200/ 331103 | consumed samples: 3635200 | consumed tokens: 7444889600 | elapsed time per iteration (s): 0.37 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.650771E+00 | grad norm: 0.316 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 686.190 | TFLOPs: 32.03 |
7: iteration 14300/ 331103 | consumed samples: 3660800 | consumed tokens: 7497318400 | elapsed time per iteration (s): 0.37 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.648053E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 686.913 | TFLOPs: 32.06 |
7: iteration 14400/ 331103 | consumed samples: 3686400 | consumed tokens: 7549747200 | elapsed time per iteration (s): 0.37 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.645648E+00 | grad norm: 0.334 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.067 | TFLOPs: 31.98 |
number of nan iterations: 0 | samples per second: 682.227 | TFLOPs: 31.84 | 7: iteration 14600/ 331103 | consumed samples: 3737600 | consumed tokens: 7654604800 | elapsed time per iteration (s): 0.38 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.640566E+00 | grad norm: 0.323 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.050 | TFLOPs: 31.84 | 7: iteration 14700/ 331103 | consumed samples: 3763200 | consumed tokens: 7707033600 | elapsed time per iteration (s): 0.37 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.639489E+00 | grad norm: 0.314 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.785 | TFLOPs: 31.96 | 7: iteration 14800/ 331103 | consumed samples: 3788800 | consumed tokens: 7759462400 | elapsed time per iteration (s): 0.38 | learning rate: 1.995E-04 | global batch size: 256 | lm loss: 3.636475E+00 | grad norm: 0.325 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.416 | TFLOPs: 31.85 | 7: iteration 14900/ 331103 | consumed samples: 3814400 | consumed tokens: 7811891200 | elapsed time per iteration (s): 0.37 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.635658E+00 | grad norm: 0.297 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.827 | TFLOPs: 31.97 | 7: iteration 15000/ 331103 | consumed samples: 3840000 | consumed tokens: 7864320000 | elapsed time per iteration (s): 0.38 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.632935E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.088 | TFLOPs: 31.84 | 7: iteration 15100/ 331103 | consumed samples: 3865600 | consumed tokens: 7916748800 | elapsed time per iteration (s): 0.37 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.632839E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.305 | TFLOPs: 31.89 | 7: iteration 15200/ 331103 | consumed samples: 3891200 | consumed tokens: 7969177600 | elapsed time per iteration (s): 0.37 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.627014E+00 | grad norm: 0.276 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.871 | TFLOPs: 31.92 | 7: iteration 15300/ 331103 | consumed samples: 3916800 | consumed tokens: 8021606400 | elapsed time per iteration (s): 0.37 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.623877E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.563 | TFLOPs: 32.00 | 7: iteration 15400/ 331103 | consumed samples: 3942400 | consumed tokens: 8074035200 | elapsed time per iteration (s): 0.37 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.625806E+00 | grad norm: 0.309 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.820 | TFLOPs: 31.87 | 7: iteration 15500/ 331103 | consumed samples: 3968000 | consumed tokens: 8126464000 | elapsed time per iteration (s): 0.37 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.621915E+00 | grad norm: 0.323 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 687.127 | TFLOPs: 32.07 
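Note on the throughput columns in these entries: the logged samples-per-second and TFLOPs figures can be cross-checked against the launch configuration. The sketch below is a rough back-of-the-envelope check, not the logger's own code; it assumes the usual Megatron-style FLOPs estimate with a 4x factor for --checkpoint-activations, a GPT-2 vocabulary padded to 50304, and the 64 GPUs (8 nodes x 8 ranks) implied by the bf16_zero_pp_rank_0..63 shards saved later in this log.

# Rough cross-check of the logged throughput (assumptions noted above,
# not taken from the log itself): Megatron-style FLOPs estimate with a
# 4x activation-checkpointing factor, padded vocab 50304, 64 GPUs.
batch  = 256      # --global-batch-size
seq    = 2048     # --seq-length
layers = 15       # --num-layers
hidden = 768      # --hidden-size
vocab  = 50304    # assumed padded GPT-2 vocabulary
n_gpus = 64       # 8 nodes x 8 ranks (see bf16_zero_pp_rank_0..63 below)

# iteration 15500 above: samples per second: 687.127 | TFLOPs: 32.07
samples_per_sec = 687.127
iter_time = batch / samples_per_sec          # ~0.373 s, logged as 0.37

flops_per_iter = (
    96 * batch * seq * layers * hidden**2
    * (1 + seq / (6 * hidden) + vocab / (16 * layers * hidden))
)
tflops_per_gpu = flops_per_iter / iter_time / n_gpus / 1e12
print(f"{iter_time:.3f} s/iter, {tflops_per_gpu:.2f} TFLOPs/GPU")  # logged: 32.07

Dividing the global batch of 256 by the logged 687.127 samples/s recovers the ~0.37 s iteration time, and under the assumptions above the same estimate lands on the logged ~32 TFLOPs per GPU.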
| 7: iteration 15600/ 331103 | consumed samples: 3993600 | consumed tokens: 8178892800 | elapsed time per iteration (s): 0.38 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.620678E+00 | grad norm: 0.263 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.733 | TFLOPs: 31.77 | 7: iteration 15700/ 331103 | consumed samples: 4019200 | consumed tokens: 8231321600 | elapsed time per iteration (s): 0.38 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.618488E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.240 | TFLOPs: 31.84 | 7: iteration 15800/ 331103 | consumed samples: 4044800 | consumed tokens: 8283750400 | elapsed time per iteration (s): 0.37 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 3.615672E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.897 | TFLOPs: 31.97 | 7: iteration 15900/ 331103 | consumed samples: 4070400 | consumed tokens: 8336179200 | elapsed time per iteration (s): 0.37 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.614998E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.139 | TFLOPs: 31.89 | 0: [2023-03-17 12:36:52,937] [INFO] [logging.py:68:log_dist] [Rank 0] step=16000, skipped=0, lr=[0.00019933529208396184, 0.00019933529208396184, 0.00019933529208396184], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 16000/ 331103 | consumed samples: 4096000 | consumed tokens: 8388608000 | elapsed time per iteration (s): 0.37 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.614309E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.499 | TFLOPs: 32.00 | 0: steps: 16000 loss: 3.6230 iter time (s): 0.372 samples/sec: 687.823 7: iteration 16100/ 331103 | consumed samples: 4121600 | consumed tokens: 8441036800 | elapsed time per iteration (s): 0.37 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.611995E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.825 | TFLOPs: 31.97 | 7: iteration 16200/ 331103 | consumed samples: 4147200 | consumed tokens: 8493465600 | elapsed time per iteration (s): 0.37 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.611889E+00 | grad norm: 0.268 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.097 | TFLOPs: 31.93 | 7: iteration 16300/ 331103 | consumed samples: 4172800 | consumed tokens: 8545894400 | elapsed time per iteration (s): 0.38 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.608791E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.808 | TFLOPs: 31.82 | 7: iteration 16400/ 331103 | consumed samples: 4198400 | consumed tokens: 8598323200 | elapsed time per iteration (s): 0.37 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.603819E+00 | grad norm: 0.304 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 686.425 | TFLOPs: 32.04 | 7: iteration 16500/ 331103 | consumed samples: 4224000 | consumed tokens: 8650752000 | elapsed time per iteration (s): 0.37 | 
learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.603286E+00 | grad norm: 0.306 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.472 | TFLOPs: 32.00 | 7: iteration 16600/ 331103 | consumed samples: 4249600 | consumed tokens: 8703180800 | elapsed time per iteration (s): 0.37 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.600392E+00 | grad norm: 0.331 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.667 | TFLOPs: 31.91 | 7: iteration 16700/ 331103 | consumed samples: 4275200 | consumed tokens: 8755609600 | elapsed time per iteration (s): 0.37 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 3.598329E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.136 | TFLOPs: 31.89 | 7: iteration 16800/ 331103 | consumed samples: 4300800 | consumed tokens: 8808038400 | elapsed time per iteration (s): 0.37 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.599718E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.804 | TFLOPs: 31.87 | 7: iteration 16900/ 331103 | consumed samples: 4326400 | consumed tokens: 8860467200 | elapsed time per iteration (s): 0.37 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.595290E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.650 | TFLOPs: 31.96 | 7: iteration 17000/ 331103 | consumed samples: 4352000 | consumed tokens: 8912896000 | elapsed time per iteration (s): 0.38 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.595411E+00 | grad norm: 0.345 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.491 | TFLOPs: 31.62 | 7: iteration 17100/ 331103 | consumed samples: 4377600 | consumed tokens: 8965324800 | elapsed time per iteration (s): 0.37 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.594903E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 685.824 | TFLOPs: 32.01 | 7: iteration 17200/ 331103 | consumed samples: 4403200 | consumed tokens: 9017753600 | elapsed time per iteration (s): 0.38 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.592975E+00 | grad norm: 0.314 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.020 | TFLOPs: 31.74 | 7: iteration 17300/ 331103 | consumed samples: 4428800 | consumed tokens: 9070182400 | elapsed time per iteration (s): 0.37 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.591157E+00 | grad norm: 0.321 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 686.081 | TFLOPs: 32.02 | 7: iteration 17400/ 331103 | consumed samples: 4454400 | consumed tokens: 9122611200 | elapsed time per iteration (s): 0.37 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.589514E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.121 | TFLOPs: 31.93 | 7: iteration 17500/ 331103 | consumed samples: 4480000 | consumed tokens: 9175040000 | elapsed time per iteration (s): 0.37 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.587875E+00 
| grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 687.604 | TFLOPs: 32.09 | 7: iteration 17600/ 331103 | consumed samples: 4505600 | consumed tokens: 9227468800 | elapsed time per iteration (s): 0.38 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 3.587416E+00 | grad norm: 0.325 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.051 | TFLOPs: 31.79 | 7: iteration 17700/ 331103 | consumed samples: 4531200 | consumed tokens: 9279897600 | elapsed time per iteration (s): 0.37 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.584591E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.523 | TFLOPs: 31.90 | 7: iteration 17800/ 331103 | consumed samples: 4556800 | consumed tokens: 9332326400 | elapsed time per iteration (s): 0.37 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.583283E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.701 | TFLOPs: 31.96 | 7: iteration 17900/ 331103 | consumed samples: 4582400 | consumed tokens: 9384755200 | elapsed time per iteration (s): 0.37 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.581010E+00 | grad norm: 0.320 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 690.105 | TFLOPs: 32.21 | 0: [2023-03-17 12:49:21,272] [INFO] [logging.py:68:log_dist] [Rank 0] step=18000, skipped=0, lr=[0.00019910961309073215, 0.00019910961309073215, 0.00019910961309073215], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 18000/ 331103 | consumed samples: 4608000 | consumed tokens: 9437184000 | elapsed time per iteration (s): 0.37 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.578739E+00 | grad norm: 0.324 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 686.749 | TFLOPs: 32.05 | 0: steps: 18000 loss: 3.5909 iter time (s): 0.372 samples/sec: 688.177 7: iteration 18100/ 331103 | consumed samples: 4633600 | consumed tokens: 9489612800 | elapsed time per iteration (s): 0.37 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.578163E+00 | grad norm: 0.319 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 688.304 | TFLOPs: 32.13 | 7: iteration 18200/ 331103 | consumed samples: 4659200 | consumed tokens: 9542041600 | elapsed time per iteration (s): 0.37 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.577359E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.321 | TFLOPs: 31.89 | 7: iteration 18300/ 331103 | consumed samples: 4684800 | consumed tokens: 9594470400 | elapsed time per iteration (s): 0.38 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.576289E+00 | grad norm: 0.310 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.548 | TFLOPs: 31.81 | 7: iteration 18400/ 331103 | consumed samples: 4710400 | consumed tokens: 9646899200 | elapsed time per iteration (s): 0.38 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 3.573613E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 677.735 | 
TFLOPs: 31.63 | 7: iteration 18500/ 331103 | consumed samples: 4736000 | consumed tokens: 9699328000 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.574578E+00 | grad norm: 0.306 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.520 | TFLOPs: 31.81 | 7: iteration 18600/ 331103 | consumed samples: 4761600 | consumed tokens: 9751756800 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.571280E+00 | grad norm: 0.355 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.054 | TFLOPs: 31.70 | 7: iteration 18700/ 331103 | consumed samples: 4787200 | consumed tokens: 9804185600 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.570726E+00 | grad norm: 0.309 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.758 | TFLOPs: 31.82 | 7: iteration 18800/ 331103 | consumed samples: 4812800 | consumed tokens: 9856614400 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.569207E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.822 | TFLOPs: 31.82 | 7: iteration 18900/ 331103 | consumed samples: 4838400 | consumed tokens: 9909043200 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.564820E+00 | grad norm: 0.338 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.465 | TFLOPs: 31.67 | 7: iteration 19000/ 331103 | consumed samples: 4864000 | consumed tokens: 9961472000 | elapsed time per iteration (s): 0.37 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.568119E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 688.038 | TFLOPs: 32.12 | 7: iteration 19100/ 331103 | consumed samples: 4889600 | consumed tokens: 10013900800 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.567083E+00 | grad norm: 0.312 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.254 | TFLOPs: 31.85 | 7: iteration 19200/ 331103 | consumed samples: 4915200 | consumed tokens: 10066329600 | elapsed time per iteration (s): 0.38 | learning rate: 1.990E-04 | global batch size: 256 | lm loss: 3.565630E+00 | grad norm: 0.307 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.658 | TFLOPs: 31.77 | 7: iteration 19300/ 331103 | consumed samples: 4940800 | consumed tokens: 10118758400 | elapsed time per iteration (s): 0.37 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.564744E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 686.329 | TFLOPs: 32.04 | 7: iteration 19400/ 331103 | consumed samples: 4966400 | consumed tokens: 10171187200 | elapsed time per iteration (s): 0.37 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.558977E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.810 | TFLOPs: 31.87 | 7: iteration 19500/ 331103 | consumed samples: 4992000 
| consumed tokens: 10223616000 | elapsed time per iteration (s): 0.38 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.560199E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.712 | TFLOPs: 31.82 |
7: iteration 19600/ 331103 | consumed samples: 5017600 | consumed tokens: 10276044800 | elapsed time per iteration (s): 0.37 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.558405E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 688.531 | TFLOPs: 32.14 |
7: iteration 19700/ 331103 | consumed samples: 5043200 | consumed tokens: 10328473600 | elapsed time per iteration (s): 0.37 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.558007E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.836 | TFLOPs: 31.92 |
7: iteration 19800/ 331103 | consumed samples: 5068800 | consumed tokens: 10380902400 | elapsed time per iteration (s): 0.37 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.555179E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 684.150 | TFLOPs: 31.93 |
7: iteration 19900/ 331103 | consumed samples: 5094400 | consumed tokens: 10433331200 | elapsed time per iteration (s): 0.37 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.556046E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.907 | TFLOPs: 31.92 |
0: [2023-03-17 13:01:51,076] [INFO] [logging.py:68:log_dist] [Rank 0] step=20000, skipped=0, lr=[0.00019885119456175047, 0.00019885119456175047, 0.00019885119456175047], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
7: iteration 20000/ 331103 | consumed samples: 5120000 | consumed tokens: 10485760000 | elapsed time per iteration (s): 0.38 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 3.551610E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 681.431 | TFLOPs: 31.81 |
0: steps: 20000 loss: 3.5638 iter time (s): 0.373 samples/sec: 686.968
7: ------------------------------------------------------------------------------------------------
7: validation loss at iteration 20000 | lm loss value: 3.876791E+00 | lm loss PPL: 4.826905E+01 |
7: ------------------------------------------------------------------------------------------------
0: saving checkpoint at iteration 20000 to checkpoints_146m174b100mdedup
0: [2023-03-17 13:01:51,210] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step20000 is begin to save!
0: [2023-03-17 13:03:56,094] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_01-model_00-model_states.pt...
0: [2023-03-17 13:03:56,220] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_01-model_00-model_states.pt.
0: [2023-03-17 13:03:56,220] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_03-model_00-model_states.pt...
0: [2023-03-17 13:03:56,236] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_03-model_00-model_states.pt.
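At the step-20000 milestone above, two of the logged numbers are easy to sanity-check: the validation perplexity is exp() of the lm loss, and the learning rate printed by log_dist follows from the warmup/decay settings in the launch command. The following is a minimal sketch assuming Megatron's sample-based schedule (linear warmup over --lr-warmup-samples, then cosine decay from --lr to --min-lr at --lr-decay-samples); the values it prints agree with the log to the precision shown.

import math

# Validation at iteration 20000: PPL should be exp(lm loss).
print(math.exp(3.876791))          # ~48.269, logged as lm loss PPL: 4.826905E+01

# Learning rate at step 20000, assuming sample-based linear warmup +
# cosine decay with the values from the launch command.
max_lr, min_lr = 2e-4, 2e-5        # --lr, --min-lr
warmup_samples = 847_625           # --lr-warmup-samples
decay_samples  = 84_762_549        # --lr-decay-samples
samples        = 20_000 * 256      # consumed samples at step 20000

if samples <= warmup_samples:
    lr = max_lr * samples / warmup_samples
else:
    ratio = (samples - warmup_samples) / (decay_samples - warmup_samples)
    lr = min_lr + (max_lr - min_lr) * 0.5 * (1.0 + math.cos(math.pi * ratio))
print(lr)                          # ~1.98851e-04, logged as lr=[0.00019885119456175047, ...]

# Token bookkeeping: consumed tokens = consumed samples * sequence length.
print(5_120_000 * 2048)            # 10485760000, as logged at iteration 20000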
0: [2023-03-17 13:03:56,237] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_04-model_00-model_states.pt... 0: [2023-03-17 13:03:56,253] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_04-model_00-model_states.pt. 0: [2023-03-17 13:03:56,253] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_05-model_00-model_states.pt... 0: [2023-03-17 13:03:56,269] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_05-model_00-model_states.pt. 0: [2023-03-17 13:03:56,270] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_06-model_00-model_states.pt... 0: [2023-03-17 13:03:56,285] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_06-model_00-model_states.pt. 0: [2023-03-17 13:03:56,286] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_07-model_00-model_states.pt... 0: [2023-03-17 13:03:56,301] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_07-model_00-model_states.pt. 0: [2023-03-17 13:03:56,302] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_08-model_00-model_states.pt... 0: [2023-03-17 13:03:56,317] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_08-model_00-model_states.pt. 0: [2023-03-17 13:03:56,318] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_09-model_00-model_states.pt... 0: [2023-03-17 13:03:56,333] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_09-model_00-model_states.pt. 0: [2023-03-17 13:03:56,334] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_10-model_00-model_states.pt... 0: [2023-03-17 13:03:56,349] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_10-model_00-model_states.pt. 0: [2023-03-17 13:03:56,350] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_11-model_00-model_states.pt... 0: [2023-03-17 13:03:56,365] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_11-model_00-model_states.pt. 0: [2023-03-17 13:03:56,365] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_12-model_00-model_states.pt... 0: [2023-03-17 13:03:56,381] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_12-model_00-model_states.pt. 0: [2023-03-17 13:03:56,381] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_13-model_00-model_states.pt... 0: [2023-03-17 13:03:56,397] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_13-model_00-model_states.pt. 
0: [2023-03-17 13:03:56,397] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_14-model_00-model_states.pt... 0: [2023-03-17 13:03:56,413] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_14-model_00-model_states.pt. 0: [2023-03-17 13:03:56,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_15-model_00-model_states.pt... 0: [2023-03-17 13:03:56,429] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_15-model_00-model_states.pt. 0: [2023-03-17 13:03:56,429] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_16-model_00-model_states.pt... 0: [2023-03-17 13:03:56,444] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_16-model_00-model_states.pt. 0: [2023-03-17 13:03:56,445] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_17-model_00-model_states.pt... 0: [2023-03-17 13:03:56,460] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_17-model_00-model_states.pt. 0: [2023-03-17 13:03:56,460] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/layer_19-model_00-model_states.pt... 0: [2023-03-17 13:03:56,461] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/layer_19-model_00-model_states.pt. 0: [2023-03-17 13:03:56,462] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_146m174b100mdedup/global_step20000/mp_rank_00_model_states.pt 0: [2023-03-17 13:03:56,462] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/mp_rank_00_model_states.pt... 0: [2023-03-17 13:03:56,464] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/mp_rank_00_model_states.pt. 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 
6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 
5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 
7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 7: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 3: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 5: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 4: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 2: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 1: [2023-03-17 13:03:56,489] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 
0: [2023-03-17 13:03:56,525] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:56,526] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:56,526] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,526] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:56,531] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,531] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:56,531] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:56,531] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:56,531] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2023-03-17 13:03:56,532] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,532] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:56,534] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 
0: [2023-03-17 13:03:56,534] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,534] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 0: [2023-03-17 13:03:56,553] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 0: [2023-03-17 13:03:56,553] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 
2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 2: [2023-03-17 13:03:56,562] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step20000 is ready now! 6: [2023-03-17 13:03:56,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2023-03-17 13:03:56,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2023-03-17 13:03:56,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2023-03-17 13:03:56,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2023-03-17 13:03:56,567] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 
[2023-03-17 13:03:56,567-608] Checkpoint global_step20000: each data-parallel rank logged here wrote its ZeRO (bf16) optimizer-state shard to checkpoints_146m174b100mdedup/global_step20000/bf16_zero_pp_rank_<rank>_mp_rank_00_optim_states.pt and then committed the checkpoint. The per-rank save (torch_checkpoint_engine.py:17:save), _save_zero_checkpoint (engine.py:3213) and commit (torch_checkpoint_engine.py:27, "Checkpoint global_step20000 is ready now!") lines are identical up to the rank number and are collapsed here by node:
6: 13:03:56,567-568  ranks 48-55 saved and committed
7: 13:03:56,570-578  ranks 56-63 saved and committed
1: 13:03:56,575      ranks 8-15 saved and committed
3: 13:03:56,587-588  ranks 24-31 saved and committed
4: 13:03:56,594-595  ranks 32-39 saved and committed
5: 13:03:56,608      ranks 40-47 saved and committed
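Each optimizer-state shard listed above is a regular PyTorch checkpoint file, so it can be inspected offline once the checkpoint has been committed. A minimal sketch for dumping the structure of one shard (the path is copied from the log; the key layout inside the file depends on the DeepSpeed version, so the code assumes nothing about specific field names):

import torch

# Path copied from the save log above; any of the per-rank shards works the same way.
SHARD = ("checkpoints_146m174b100mdedup/global_step20000/"
         "bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt")

def summarize(obj, prefix="shard", depth=0, max_depth=3):
    # Walk whatever nested dict/list structure torch.load returns and print
    # tensor shapes; no assumptions about DeepSpeed's internal schema.
    if depth > max_depth:
        return
    if isinstance(obj, dict):
        for key, val in obj.items():
            summarize(val, f"{prefix}/{key}", depth + 1, max_depth)
    elif isinstance(obj, (list, tuple)):
        for i, val in enumerate(obj):
            summarize(val, f"{prefix}[{i}]", depth + 1, max_depth)
    elif torch.is_tensor(obj):
        print(f"{prefix}: tensor{tuple(obj.shape)} {obj.dtype}")
    else:
        print(f"{prefix}: {type(obj).__name__}")

# Newer PyTorch versions may require weights_only=False for pickled optimizer state.
summarize(torch.load(SHARD, map_location="cpu"))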
0: successfully saved checkpoint at iteration 20000 to checkpoints_146m174b100mdedup
7: time (ms) | save-checkpoint: 125417.70
7: iteration 20100/ 331103 | consumed samples: 5145600 | consumed tokens: 10538188800 | elapsed time per iteration (s): 1.65 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.551216E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 154.729 | TFLOPs: 7.22 |
7: iteration 20200/ 331103 | consumed samples: 5171200 | consumed tokens: 10590617600 | elapsed time per iteration (s): 0.38 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.547782E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 674.069 | TFLOPs: 31.46 |
7: iteration 20300/ 331103 | consumed samples: 5196800 | consumed tokens: 10643046400 | elapsed time per iteration (s): 0.38 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.550424E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 682.566 | TFLOPs: 31.86 |
7: iteration 20400/ 331103 | consumed samples: 5222400 | consumed tokens: 10695475200 | elapsed time per iteration (s): 0.38 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.548474E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 678.624 | TFLOPs: 31.68 |
7: iteration 20500/ 331103 | consumed samples: 5248000 | consumed tokens: 10747904000 | elapsed time per iteration (s): 0.37 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.546546E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.015 | TFLOPs: 31.88 |
7: iteration 20600/ 331103 | consumed samples: 5273600 | consumed tokens: 10800332800 | elapsed time per iteration (s): 0.38 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.545931E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 680.793 | TFLOPs: 31.78 |
7: iteration 20700/ 331103 | consumed samples: 5299200 | consumed tokens: 10852761600 | elapsed time per iteration (s): 0.37 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 3.548520E+00 | grad norm: 0.293 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 687.208 | TFLOPs: 32.08 |
7: iteration 20800/ 331103 | consumed samples: 5324800 | consumed tokens: 10905190400 | elapsed time per iteration (s): 0.37 | learning rate: 1.987E-04 | global batch size: 256 | lm loss: 3.543840E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 683.934 | TFLOPs: 31.92 |
7: iteration 20900/ 331103 | consumed samples: 5350400 | consumed tokens: 10957619200 | elapsed time per iteration (s): 0.38 | learning rate: 1.987E-04 | global batch size: 256 | lm loss: 3.544648E+00 | grad norm: 0.302 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 679.880 | TFLOPs: 31.73 |
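The iteration lines above are internally consistent: consumed tokens divided by consumed samples gives 2048 tokens per sample throughout, and samples per second is the global batch size of 256 divided by the per-iteration time. A small check (numbers copied from the log; the slight mismatch with the logged samples-per-second values comes from the elapsed time being rounded to two decimals):

GLOBAL_BATCH = 256

# (iteration, consumed samples, consumed tokens, elapsed s/iter) from the log above
rows = [
    (20100, 5_145_600, 10_538_188_800, 1.65),
    (20200, 5_171_200, 10_590_617_600, 0.38),
    (20500, 5_248_000, 10_747_904_000, 0.37),
]

for it, samples, tokens, elapsed in rows:
    tokens_per_sample = tokens / samples        # 2048.0 for every row
    samples_per_s = GLOBAL_BATCH / elapsed      # ~155 at 1.65 s, ~674-692 at 0.37-0.38 s
    tokens_per_s = samples_per_s * tokens_per_sample
    print(f"iter {it}: {tokens_per_sample:.0f} tok/sample, "
          f"{samples_per_s:.1f} samples/s, {tokens_per_s:,.0f} tok/s")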