|
Model parameters: d_model 224 ffw_size 896 kv_size 32 n_heads 7 n_layers 4 |
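For reference, the parameter totals logged further down (14,147,392 total; 2,420,544 without embeddings) can be reproduced from these hyperparameters. A minimal sketch, assuming the standard Megatron GPT-2-style layout (tied input/output embeddings, learned absolute position embeddings, biases on all linear layers, two LayerNorms per block plus a final one) and the padded vocabulary of 50,304 reported below:

d_model, ffw, n_layers = 224, 896, 4
vocab_padded, seq_len = 50304, 2048

embed = vocab_padded * d_model + seq_len * d_model   # token + position embeddings

# per transformer block: fused QKV, attention output proj, 2-layer MLP, 2 LayerNorms
attn  = (3 * d_model * d_model + 3 * d_model) + (d_model * d_model + d_model)
mlp   = (d_model * ffw + ffw) + (ffw * d_model + d_model)
norms = 2 * 2 * d_model
block = attn + mlp + norms

total_no_embed = n_layers * block + 2 * d_model      # + final LayerNorm
total          = total_no_embed + embed
print(total_no_embed, total)                         # 2420544 14147392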
|
Megatron-DeepSpeed/pretrain_gpt.py \
    --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 \
    --num-layers 4 --hidden-size 224 --num-attention-heads 7 --kv-channels 32 \
    --ffn-hidden-size 896 --seq-length 2048 --max-position-embeddings 2048 \
    --micro-batch-size 32 --global-batch-size 256 --train-samples 195_313 \
    --vocab-file gpt2/vocab.json --merge-file gpt2/merges.txt \
    --loss-scale 12 --clip-grad 1.0 --kill-switch-path kill-switch-14m400m100m \
    --bf16 --checkpoint-activations \
    --optimizer adam --adam-beta1 0.9 --adam-beta2 0.999 --adam-eps 1e-8 \
    --lr 2e-4 --min-lr 2e-5 --lr-decay-style cosine --lr-decay-samples 195_313 \
    --lr-warmup-samples 1953 --clip-grad 1.0 --weight-decay 1e-1 \
    --log-interval 10 --save-interval 1000 --eval-interval 1000 --eval-iters 1 \
    --tensorboard-dir tensorboard_14m400m100m --tensorboard-queue-size 5 \
    --log-timers-to-tensorboard --log-batch-size-to-tensorboard \
    --log-validation-ppl-to-tensorboard \
    --save checkpoints_14m400m100m --load checkpoints_14m400m100m \
    --train-weighted-split-paths-path train100m.txt \
    --valid-weighted-split-paths-path val.txt \
    --data-impl mmap --deepspeed --deepspeed_config ds_configs/3423734.json --zero-stage 0
|
START 3423734: Thu 27 Apr 2023 03:36:35 PM EEST |
|
0: |
|
0: |
|
0: ======================= ROCm System Management Interface ======================= |
|
0: ================================= Concise Info ================================= |
|
0: GPU Temp AvgPwr SCLK MCLK Fan Perf PwrCap VRAM% GPU% |
|
0: 0 47.0c 89.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 1 48.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: 2 37.0c 89.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 3 49.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: 4 47.0c 86.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 5 44.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: 6 41.0c 83.0W 800Mhz 1600Mhz 0% auto 560.0W 0% 0% |
|
0: 7 46.0c N/A 800Mhz 1600Mhz 0% auto 0.0W 0% 0% |
|
0: ================================================================================ |
|
0: ============================= End of ROCm SMI Log ============================== |
|
0: Launching on nid005141 (0/1), master nid005141 port 9999, GPUs 8, CUDA: True |
|
0: using world size: 8, data-parallel-size: 8, tensor-model-parallel size: 1, pipeline-model-parallel size: 1 |
|
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type. |
|
0: using torch.bfloat16 for parameters ... |
|
0: ------------------------ arguments ------------------------ |
|
0: abort_on_unmet_fused_kernel_constraints ......... False |
|
0: accumulate_allreduce_grads_in_fp32 .............. True |
|
0: adam_beta1 ...................................... 0.9 |
|
0: adam_beta2 ...................................... 0.999 |
|
0: adam_eps ........................................ 1e-08 |
|
0: adlr_autoresume ................................. False |
|
0: adlr_autoresume_interval ........................ 1000 |
|
0: apply_query_key_layer_scaling ................... True |
|
0: apply_residual_connection_post_layernorm ........ False |
|
0: attention_dropout ............................... 0.1 |
|
0: attention_softmax_in_fp32 ....................... False |
|
0: bert_binary_head ................................ True |
|
0: bert_load ....................................... None |
|
0: bf16 ............................................ True |
|
0: bias_dropout_fusion ............................. True |
|
0: bias_gelu_fusion ................................ True |
|
0: biencoder_projection_dim ........................ 0 |
|
0: biencoder_shared_query_context_model ............ False |
|
0: block_data_path ................................. None |
|
0: checkpoint_activations .......................... True |
|
0: checkpoint_in_cpu ............................... False |
|
0: checkpoint_num_layers ........................... 1 |
|
0: clip_grad ....................................... 1.0 |
|
0: codecarbon_dir .................................. None |
|
0: consumed_train_samples .......................... 0 |
|
0: consumed_train_tokens ........................... 0 |
|
0: consumed_valid_samples .......................... 0 |
|
0: contigious_checkpointing ........................ False |
|
0: cpu_optimizer ................................... False |
|
0: cpu_torch_adam .................................. False |
|
0: curriculum_learning ............................. False |
|
0: data_impl ....................................... mmap |
|
0: data_parallel_size .............................. 8 |
|
0: data_path ....................................... None |
|
0: dataloader_type ................................. single |
|
0: DDP_impl ........................................ local |
|
0: decoder_seq_length .............................. None |
|
0: deepscale ....................................... False |
|
0: deepscale_config ................................ None |
|
0: deepspeed ....................................... True |
|
0: deepspeed_activation_checkpointing .............. False |
|
0: deepspeed_config ................................ ds_configs/3423734.json |
|
0: deepspeed_mpi ................................... False |
|
0: distribute_checkpointed_activations ............. False |
|
0: distributed_backend ............................. nccl |
|
0: embed_layernorm ................................. False |
|
0: embedding_path .................................. None |
|
0: encoder_seq_length .............................. 2048 |
|
0: eod_mask_loss ................................... False |
|
0: eval_interval ................................... 1000 |
|
0: eval_iters ...................................... 1 |
|
0: eval_only ....................................... None |
|
0: evidence_data_path .............................. None |
|
0: exit_duration_in_mins ........................... None |
|
0: exit_interval ................................... None |
|
0: ffn_hidden_size ................................. 896 |
|
0: finetune ........................................ False |
|
0: fp16 ............................................ False |
|
0: fp16_lm_cross_entropy ........................... False |
|
0: fp32_residual_connection ........................ False |
|
0: gigaflos_no_embeds .............................. 0 |
|
0: global_batch_size ............................... 256 |
|
0: glu_activation .................................. None |
|
0: hidden_dropout .................................. 0.1 |
|
0: hidden_size ..................................... 224 |
|
0: hysteresis ...................................... 2 |
|
0: ict_head_size ................................... None |
|
0: ict_load ........................................ None |
|
0: img_dim ......................................... 224 |
|
0: indexer_batch_size .............................. 128 |
|
0: indexer_log_interval ............................ 1000 |
|
0: inference ....................................... False |
|
0: init_method_std ................................. 0.02 |
|
0: init_method_xavier_uniform ...................... False |
|
0: initial_loss_scale .............................. 4294967296 |
|
0: kill_switch_path ................................ kill-switch-14m400m100m |
|
0: kv_channels ..................................... 32 |
|
0: layer_norm_fusion ............................... True |
|
0: layernorm_epsilon ............................... 1e-05 |
|
0: lazy_mpu_init ................................... None |
|
0: load ............................................ checkpoints_14m400m100m |
|
0: local_rank ...................................... None |
|
0: log_batch_size_to_tensorboard ................... True |
|
0: log_interval .................................... 10 |
|
0: log_learning_rate_to_tensorboard ................ True |
|
0: log_level ....................................... None |
|
0: log_level_replica ............................... None |
|
0: log_loss_scale_to_tensorboard ................... True |
|
0: log_num_zeros_in_grad ........................... False |
|
0: log_params_norm ................................. False |
|
0: log_path ........................................ None |
|
0: log_timers_to_tensorboard ....................... True |
|
0: log_validation_ppl_to_tensorboard ............... True |
|
0: loss_on_targets_only ............................ False |
|
0: loss_scale ...................................... 12.0 |
|
0: loss_scale_window ............................... 1000 |
|
0: lr .............................................. 0.0002 |
|
0: lr_decay_iters .................................. None |
|
0: lr_decay_samples ................................ 195313 |
|
0: lr_decay_style .................................. cosine |
|
0: lr_decay_tokens ................................. None |
|
0: lr_warmup_fraction .............................. None |
|
0: lr_warmup_iters ................................. 0 |
|
0: lr_warmup_samples ............................... 1953 |
|
0: make_vocab_size_divisible_by .................... 128 |
|
0: mask_prob ....................................... 0.15 |
|
0: masked_softmax_fusion ........................... True |
|
0: max_position_embeddings ......................... 2048 |
|
0: mean_noise_span_length .......................... None |
|
0: memory_centric_tiled_linear ..................... False |
|
0: merge_file ...................................... gpt2/merges.txt |
|
0: micro_batch_size ................................ 32 |
|
0: min_loss_scale .................................. 1.0 |
|
0: min_lr .......................................... 2e-05 |
|
0: mmap_warmup ..................................... False |
|
0: no_load_optim ................................... None |
|
0: no_load_rng ..................................... None |
|
0: no_save_optim ................................... None |
|
0: no_save_rng ..................................... None |
|
0: noise_density ................................... None |
|
0: num_attention_heads ............................. 7 |
|
0: num_channels .................................... 3 |
|
0: num_classes ..................................... 1000 |
|
0: num_layers ...................................... 4 |
|
0: num_layers_per_virtual_pipeline_stage ........... None |
|
0: num_workers ..................................... 2 |
|
0: onnx_safe ....................................... None |
|
0: openai_gelu ..................................... False |
|
0: optimizer ....................................... adam |
|
0: optimizer_fusion ................................ True |
|
0: override_lr_scheduler ........................... False |
|
0: pad_vocab_size_to ............................... None |
|
0: params_dtype .................................... torch.bfloat16 |
|
0: partition_activations ........................... False |
|
0: patch_dim ....................................... 16 |
|
0: pipeline_model_parallel_size .................... 1 |
|
0: position_embedding_type ......................... PositionEmbeddingType.absolute |
|
0: pp_partition_method ............................. None |
|
0: profile_backward ................................ False |
|
0: query_in_block_prob ............................. 0.1 |
|
0: rampup_batch_size ............................... None |
|
0: rank ............................................ 0 |
|
0: remote_device ................................... none |
|
0: reset_attention_mask ............................ False |
|
0: reset_position_ids .............................. False |
|
0: reset_progress .................................. None |
|
0: retriever_report_topk_accuracies ................ [] |
|
0: retriever_score_scaling ......................... False |
|
0: retriever_seq_length ............................ 256 |
|
0: reweight_loss_based_on_position_frequency ....... False |
|
0: sample_rate ..................................... 1.0 |
|
0: save ............................................ checkpoints_14m400m100m |
|
0: save_interval ................................... 1000 |
|
0: scatter_gather_tensors_in_pipeline .............. True |
|
0: scattered_embeddings ............................ False |
|
0: seed ............................................ 1234 |
|
0: seq_length ...................................... 2048 |
|
0: sgd_momentum .................................... 0.9 |
|
0: short_seq_prob .................................. 0.1 |
|
0: skip_train_iteration_range ...................... None |
|
0: split ........................................... None |
|
0: split_transformers .............................. False |
|
0: sync_tp_duplicated_parameters ................... False |
|
0: synchronize_each_layer .......................... False |
|
0: tensor_model_parallel_size ...................... 1 |
|
0: tensorboard_dir ................................. tensorboard_14m400m100m |
|
0: tensorboard_log_interval ........................ 1 |
|
0: tensorboard_queue_size .......................... 5 |
|
0: test_weighted_split_paths ....................... None |
|
0: test_weighted_split_paths_path .................. None |
|
0: tile_factor ..................................... 1 |
|
0: titles_data_path ................................ None |
|
0: tokenizer_name_or_path .......................... None |
|
0: tokenizer_type .................................. GPT2BPETokenizer |
|
0: train_iters ..................................... None |
|
0: train_samples ................................... 195313 |
|
0: train_tokens .................................... None |
|
0: train_weighted_split_names ...................... ['train'] |
|
0: train_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document']] |
|
0: train_weighted_split_paths_path ................. None |
|
0: train_weighted_split_splits ..................... [['0:1']] |
|
0: train_weighted_split_weights .................... [['1.0']] |
|
0: universal_checkpoint ............................ False |
|
0: use_bnb_optimizer ............................... False |
|
0: use_checkpoint_lr_scheduler ..................... False |
|
0: use_contiguous_buffers_in_ddp ................... True |
|
0: use_cpu_initialization .......................... None |
|
0: use_one_sent_docs ............................... False |
|
0: use_pin_memory .................................. False |
|
0: valid_num_workers ............................... 2 |
|
0: valid_weighted_split_names ...................... ['validation'] |
|
0: valid_weighted_split_paths ...................... [['/scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document']] |
|
0: valid_weighted_split_paths_path ................. None |
|
0: valid_weighted_split_splits ..................... [['0:1']] |
|
0: valid_weighted_split_weights .................... [['1.0']] |
|
0: virtual_pipeline_model_parallel_size ............ None |
|
0: vocab_extra_ids ................................. 0 |
|
0: vocab_file ...................................... gpt2/vocab.json |
|
0: weight_decay .................................... 0.1 |
|
0: world_size ...................................... 8 |
|
0: zero_allgather_bucket_size ...................... 0.0 |
|
0: zero_contigious_gradients ....................... False |
|
0: zero_reduce_bucket_size ......................... 0.0 |
|
0: zero_reduce_scatter ............................. False |
|
0: zero_stage ...................................... 0 |
|
0: -------------------- end of arguments --------------------- |
|
0: setting number of micro-batches to constant 1 |
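The constant of 1 follows from the batch geometry: with tensor- and pipeline-parallel sizes of 1, all 8 GPUs are data-parallel replicas, so one micro-batch per replica already fills the global batch. A sketch of the arithmetic, assuming Megatron's usual relation global = micro x num-micro-batches x data-parallel:

micro_batch, global_batch, dp_size = 32, 256, 8
num_micro_batches = global_batch // (micro_batch * dp_size)
assert micro_batch * num_micro_batches * dp_size == global_batch
print(num_micro_batches)  # 1 -> no gradient accumulation needed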
|
0: > building GPT2BPETokenizer tokenizer ... |
|
0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304) |
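The 47 dummy tokens come from rounding the GPT-2 vocabulary (50,257) up to the next multiple of make_vocab_size_divisible_by times the tensor-parallel size (128 x 1 here), which keeps the embedding matrix evenly divisible across tensor-parallel ranks. A sketch of that rounding:

orig_vocab, divisor, tp_size = 50257, 128, 1
multiple = divisor * tp_size
padded = ((orig_vocab + multiple - 1) // multiple) * multiple
print(padded, padded - orig_vocab)  # 50304 47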
|
0: DeepSpeed general environment info: |
|
0: torch install path ............... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch'] |
|
0: torch version .................... 1.13.0+rocm5.2 |
|
0: torch cuda version ............... None |
|
0: torch hip version ................ 5.2.21151-afdc89f8 |
|
0: nvcc version ..................... None |
|
0: deepspeed install path ........... ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed'] |
|
0: deepspeed info ................... 0.7.5, unknown, unknown |
|
0: deepspeed wheel compiled w. ...... torch 1.13, hip 5.1 |
|
0: **** Git info for Megatron: git_hash=unknown git_branch=unknown **** |
|
0: > initializing torch distributed ... |
|
0: [2023-04-27 15:38:51,769] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl |
|
0: > setting tensorboard ... |
|
0: > initializing tensor model parallel with size 1 |
|
0: > initializing pipeline model parallel with size 1 |
|
0: > setting random seeds to 1234 ... |
|
0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234 |
|
0: > compiling dataset index builder ... |
|
0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' |
|
0: make: Nothing to be done for 'default'. |
|
0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data' |
|
0: >>> done with dataset index builder. Compilation time: 0.112 seconds |
|
0: > compiling and loading fused kernels ... |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.cpp [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.hip [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] |
|
0: Total number of unsupported CUDA function calls: 0 |
|
0: |
|
0: |
|
0: Total number of replaced kernel launches: 87 |
|
0: ninja: no work to do. |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_cuda.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.hip [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] |
|
0: Total number of unsupported CUDA function calls: 0 |
|
0: |
|
0: |
|
0: Total number of replaced kernel launches: 63 |
|
0: ninja: no work to do. |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda_kernel.cu -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_hip_kernel.hip [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/type_shim.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/compat.h [skipped, no changes] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.h [skipped, already hipified] |
|
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.h -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.h [skipped, already hipified] |
|
0: Total number of unsupported CUDA function calls: 0 |
|
0: |
|
0: |
|
0: Total number of replaced kernel launches: 67 |
|
0: [1/1] c++ layer_norm_hip_kernel.cuda.o layer_norm_cuda.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/opt/rocm/lib -lamdhip64 -o fused_mix_prec_layer_norm_cuda.so |
|
0: >>> done with compiling and loading fused kernels. Compilation time: 10.743 seconds |
|
0: time to initialize megatron (seconds): -24.979 |
|
0: [after megatron is initialized] datetime: 2023-04-27 15:39:03 |
|
0: building GPT model ... |
|
0: [2023-04-27 15:39:03,148] [INFO] [utils.py:827:see_memory_usage] Before Building Model |
|
0: [2023-04-27 15:39:03,149] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB Max_MA 0.0 GB CA 0.0 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:03,149] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 37.57 GB, percent = 7.5% |
|
0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None |
|
0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ProcessCoord(pipe=0, data=2, model=0): 2, ProcessCoord(pipe=0, data=3, model=0): 3, ProcessCoord(pipe=0, data=4, model=0): 4, ProcessCoord(pipe=0, data=5, model=0): 5, ProcessCoord(pipe=0, data=6, model=0): 6, ProcessCoord(pipe=0, data=7, model=0): 7} |
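With tensor- and pipeline-parallel sizes of 1, the 3D process grid degenerates to a pure data-parallel line, so global rank i simply gets data-parallel coordinate i. A sketch that reproduces the mapping above, with the coordinate order (pipe, data, model) taken from the printed dict:

from itertools import product
pp, dp, tp = 1, 8, 1
topology = {coord: rank for rank, coord in
            enumerate(product(range(pp), range(dp), range(tp)))}
print(topology)  # {(0, 0, 0): 0, (0, 1, 0): 1, ..., (0, 7, 0): 7}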
|
0: [2023-04-27 15:39:03,378] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer |
|
0: stage=0 layers=11 |
|
0: 0: _to_float16 |
|
0: 1: EmbeddingPipe |
|
0: 2: <lambda> |
|
0: 3: ParallelTransformerLayerPipe |
|
0: 4: ParallelTransformerLayerPipe |
|
0: 5: ParallelTransformerLayerPipe |
|
0: 6: ParallelTransformerLayerPipe |
|
0: 7: undo |
|
0: 8: MixedFusedLayerNorm |
|
0: 9: EmbeddingPipe |
|
0: 10: float16_to_fp32 |
|
0: loss: CrossEntropy |
|
0: [2023-04-27 15:39:03,577] [INFO] [utils.py:827:see_memory_usage] After Building Model |
|
0: [2023-04-27 15:39:03,577] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:03,577] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 37.58 GB, percent = 7.5% |
|
0: setting training iterations to 762 |
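The 762 iterations are just the sample budget divided by the global batch size (rounded down), and the sample budget in turn encodes the 400M-token target in the run name 14m400m100m. A quick check of both, assuming 2,048 tokens per sample:

train_samples, global_batch, seq_len = 195_313, 256, 2048
print(train_samples // global_batch)  # 762 training iterations
print(train_samples * seq_len)        # 400_001_024 ~= 400M training tokens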
|
0: > learning rate decay style: cosine |
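The logged learning rates are consistent with a sample-based linear warmup over 1,953 samples followed by cosine decay from 2e-4 to 2e-5 over the remaining samples up to lr-decay-samples. A sketch of that schedule, assuming the AnnealingLR form (decay measured from the end of warmup); plugging in the 25,600 consumed samples at iteration 100 reproduces the 1.934E-04 seen later in the log:

import math

def lr_at(consumed, lr_max=2e-4, lr_min=2e-5, warmup=1953, decay=195_313):
    if consumed < warmup:                        # linear warmup phase
        return lr_max * consumed / warmup
    p = min((consumed - warmup) / (decay - warmup), 1.0)
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * p))

print(lr_at(25_600))  # ~1.934e-04, matching iteration 100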
|
0: DeepSpeed is enabled. |
|
0: [2023-04-27 15:39:03,578] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown |
|
0: [2023-04-27 15:39:08,175] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False |
|
0: [2023-04-27 15:39:08,175] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer |
|
0: [2023-04-27 15:39:08,175] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer |
|
0: [2023-04-27 15:39:08,176] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam |
|
0: [2023-04-27 15:39:08,176] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer |
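As flagged near the top of the log ("accumulate and all-reduce gradients in fp32 for bfloat16 data type"), the BF16 optimizer keeps bf16 parameters for compute but maintains an fp32 master copy and fp32 gradient accumulators, so the optimizer update never loses precision to bf16 rounding. A toy sketch of the idea (not DeepSpeed's actual implementation, and with plain SGD standing in for FusedAdam):

import torch

param  = torch.randn(224, 224, dtype=torch.bfloat16)  # bf16 compute weights
master = param.float()                                 # fp32 master copy
grad32 = torch.zeros_like(master)                      # fp32 grad accumulator

for _ in range(1):                  # one accumulation window (micro-batches = 1)
    g = torch.randn_like(param)     # stand-in for a bf16 backward pass
    grad32 += g.float()             # accumulate in fp32

master -= 2e-4 * grad32             # optimizer step on the fp32 master
param.copy_(master.to(torch.bfloat16))  # round back down for the next forward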
|
0: [2023-04-27 15:39:08,295] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer |
|
0: [2023-04-27 15:39:08,296] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:08,296] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 41.07 GB, percent = 8.2% |
|
0: ninja: no work to do. |
|
0: Time to load utils op: 0.5155799388885498 seconds |
|
0: Time to load utils op: 0.6356937885284424 seconds |
|
0: Time to load utils op: 0.6354517936706543 seconds |
|
0: Time to load utils op: 0.6361770629882812 seconds

0: Time to load utils op: 0.6364157199859619 seconds
|
0: Time to load utils op: 0.6359162330627441 seconds |
|
0: Time to load utils op: 0.6366944313049316 seconds |
|
0: Time to load utils op: 0.6369485855102539 seconds |
|
0: [2023-04-27 15:39:08,926] [INFO] [utils.py:827:see_memory_usage] before initializing group 0 |
|
0: [2023-04-27 15:39:08,926] [INFO] [utils.py:828:see_memory_usage] MA 0.03 GB Max_MA 0.03 GB CA 0.05 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:08,926] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 40.44 GB, percent = 8.0% |
|
0: [2023-04-27 15:39:09,804] [INFO] [utils.py:827:see_memory_usage] after initializing group 0 |
|
0: [2023-04-27 15:39:09,805] [INFO] [utils.py:828:see_memory_usage] MA 0.08 GB Max_MA 0.08 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:09,805] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 39.22 GB, percent = 7.8% |
|
0: Time to load utils op: 0.005361318588256836 seconds

0: Time to load utils op: 0.005542755126953125 seconds
|
0: Time to load utils op: 0.00555419921875 seconds |
|
0: Time to load utils op: 0.005349397659301758 seconds |
|
0: Time to load utils op: 0.005378007888793945 seconds |
|
0: Time to load utils op: 0.005393028259277344 seconds |
|
0: Time to load utils op: 0.005184650421142578 seconds |
|
0: [2023-04-27 15:39:09,926] [INFO] [utils.py:827:see_memory_usage] before initializing group 1 |
|
0: [2023-04-27 15:39:09,926] [INFO] [utils.py:828:see_memory_usage] MA 0.08 GB Max_MA 0.08 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:09,927] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 39.14 GB, percent = 7.8% |
|
0: [2023-04-27 15:39:10,035] [INFO] [utils.py:827:see_memory_usage] after initializing group 1 |
|
0: [2023-04-27 15:39:10,036] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:10,036] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.91 GB, percent = 7.7% |
|
0: [2023-04-27 15:39:10,140] [INFO] [utils.py:827:see_memory_usage] before initializing group 2 |
|
0: [2023-04-27 15:39:10,140] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:10,141] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.66 GB, percent = 7.7% |
|
0: [2023-04-27 15:39:10,245] [INFO] [utils.py:827:see_memory_usage] after initializing group 2 |
|
0: [2023-04-27 15:39:10,245] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:10,245] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.52 GB, percent = 7.7% |
|
0: [2023-04-27 15:39:10,348] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer |
|
0: [2023-04-27 15:39:10,348] [INFO] [utils.py:828:see_memory_usage] MA 0.09 GB Max_MA 0.09 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:10,348] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.31 GB, percent = 7.6% |
|
0: [2023-04-27 15:39:10,463] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer |
|
0: [2023-04-27 15:39:10,463] [INFO] [utils.py:828:see_memory_usage] MA 0.1 GB Max_MA 0.1 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:10,464] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.16 GB, percent = 7.6% |
|
0: [2023-04-27 15:39:10,566] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer |
|
0: [2023-04-27 15:39:10,566] [INFO] [utils.py:828:see_memory_usage] MA 0.1 GB Max_MA 0.1 GB CA 0.12 GB Max_CA 0 GB |
|
0: [2023-04-27 15:39:10,566] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 38.0 GB, percent = 7.6% |
|
0: [2023-04-27 15:39:10,566] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam |
|
0: [2023-04-27 15:39:10,567] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler |
|
0: [2023-04-27 15:39:10,567] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler = <megatron.learning_rates.AnnealingLR object at 0x14d3e1c91fd0> |
|
0: [2023-04-27 15:39:10,567] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] |
|
0: [2023-04-27 15:39:10,567] [INFO] [config.py:1007:print] DeepSpeedEngine configuration: |
|
0: [2023-04-27 15:39:10,567] [INFO] [config.py:1011:print] activation_checkpointing_config { |
|
0: "partition_activations": false, |
|
0: "contiguous_memory_optimization": false, |
|
0: "cpu_checkpointing": false, |
|
0: "number_checkpoints": null, |
|
0: "synchronize_checkpoint_boundary": false, |
|
0: "profile": false |
|
0: } |
|
0: [2023-04-27 15:39:10,567] [INFO] [config.py:1011:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} |
|
0: [2023-04-27 15:39:10,567] [INFO] [config.py:1011:print] amp_enabled .................. False |
|
0: [2023-04-27 15:39:10,567] [INFO] [config.py:1011:print] amp_params ................... False |
|
0: [2023-04-27 15:39:10,567] [INFO] [config.py:1011:print] autotuning_config ............ { |
|
0: "enabled": false, |
|
0: "start_step": null, |
|
0: "end_step": null, |
|
0: "metric_path": null, |
|
0: "arg_mappings": null, |
|
0: "metric": "throughput", |
|
0: "model_info": null, |
|
0: "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results", |
|
0: "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps", |
|
0: "overwrite": true, |
|
0: "fast": true, |
|
0: "start_profile_step": 3, |
|
0: "end_profile_step": 5, |
|
0: "tuner_type": "gridsearch", |
|
0: "tuner_early_stopping": 5, |
|
0: "tuner_num_trials": 50, |
|
0: "model_info_path": null, |
|
0: "mp_size": 1, |
|
0: "max_train_batch_size": null, |
|
0: "min_train_batch_size": 1, |
|
0: "max_train_micro_batch_size_per_gpu": 1.024000e+03, |
|
0: "min_train_micro_batch_size_per_gpu": 1, |
|
0: "num_tuning_micro_batch_sizes": 3 |
|
0: } |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] bfloat16_enabled ............. True |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] checkpoint_parallel_write_pipeline False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] checkpoint_tag_validation_enabled True |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] checkpoint_tag_validation_fail False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x14d3e1c91d30> |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] communication_data_type ...... None |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] curriculum_enabled ........... False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] curriculum_params ............ False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] dataloader_drop_last ......... False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] disable_allgather ............ False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] dump_state ................... False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] dynamic_loss_scale_args ...... None |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_enabled ........... False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_gas_boundary_resolution 1 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_layer_name ........ bert.encoder.layer |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_layer_num ......... 0 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_max_iter .......... 100 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_stability ......... 1e-06 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_tol ............... 0.01 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] eigenvalue_verbose ........... False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] elasticity_enabled ........... False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] flops_profiler_config ........ { |
|
0: "enabled": false, |
|
0: "profile_step": 1, |
|
0: "module_depth": -1, |
|
0: "top_modules": 1, |
|
0: "detailed": true, |
|
0: "output_file": null |
|
0: } |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] fp16_auto_cast ............... None |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] fp16_enabled ................. False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] fp16_master_weights_and_gradients False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] global_rank .................. 0 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] gradient_accumulation_steps .. 1 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] gradient_clipping ............ 1.0 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] gradient_predivide_factor .... 1.0 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] initial_dynamic_scale ........ 1 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] load_universal_checkpoint .... False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] loss_scale ................... 1.0 |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] memory_breakdown ............. False |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] monitor_config ............... <deepspeed.monitor.config.DeepSpeedMonitorConfig object at 0x14d3e1c91ca0> |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] nebula_config ................ { |
|
0: "enabled": false, |
|
0: "persistent_storage_path": null, |
|
0: "persistent_time_interval": 100, |
|
0: "num_of_version_in_retention": 2, |
|
0: "enable_nebula_load": true, |
|
0: "load_path": null |
|
0: } |
|
0: [2023-04-27 15:39:10,568] [INFO] [config.py:1011:print] optimizer_legacy_fusion ...... False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] optimizer_name ............... None |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] optimizer_params ............. None |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0} |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] pld_enabled .................. False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] pld_params ................... False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] prescale_gradients ........... False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] scheduler_name ............... None |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] scheduler_params ............. None |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] sparse_attention ............. None |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] sparse_gradients_enabled ..... False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] steps_per_print .............. 2000 |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] train_batch_size ............. 256 |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] train_micro_batch_size_per_gpu 32 |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] use_node_local_storage ....... False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] wall_clock_breakdown ......... False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] world_size ................... 8 |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] zero_allow_untested_optimizer False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] zero_enabled ................. False |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:1011:print] zero_optimization_stage ...... 0 |
|
0: [2023-04-27 15:39:10,569] [INFO] [config.py:996:print_user_config] json = { |
|
0: "train_micro_batch_size_per_gpu": 32, |
|
0: "train_batch_size": 256, |
|
0: "gradient_clipping": 1.0, |
|
0: "zero_optimization": { |
|
0: "stage": 0 |
|
0: }, |
|
0: "bf16": { |
|
0: "enabled": true |
|
0: }, |
|
0: "steps_per_print": 2.000000e+03, |
|
0: "wall_clock_breakdown": false |
|
0: } |
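The printed user config above is the entire contents of ds_configs/3423734.json; the launcher presumably writes one such file per job ID (that per-job naming is an assumption based on the --deepspeed_config flag). A sketch of generating it:

import json

ds_config = {
    "train_micro_batch_size_per_gpu": 32,
    "train_batch_size": 256,
    "gradient_clipping": 1.0,
    "zero_optimization": {"stage": 0},
    "bf16": {"enabled": True},
    "steps_per_print": 2000,
    "wall_clock_breakdown": False,
}
with open("ds_configs/3423734.json", "w") as f:  # hypothetical per-job path
    json.dump(ds_config, f, indent=2)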
|
0: Time to load utils op: 0.0004215240478515625 seconds |
|
0: [2023-04-27 15:39:10,570] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=32 |
|
0: [2023-04-27 15:39:10,711] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=11 [0, 11) STAGE_PARAMS=14147392 (14.147M) TOTAL_PARAMS=14147392 (14.147M) UNIQUE_PARAMS=14147392 (14.147M) |
|
0: [2023-04-27 15:39:10,713] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: WARNING: could not find the metadata file checkpoints_14m400m100m |
|
0: will not load any checkpoints and will start from random |
|
0: [2023-04-27 15:39:10,713] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: [2023-04-27 15:39:10,713] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: [2023-04-27 15:39:10,713] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: [2023-04-27 15:39:10,713] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: [2023-04-27 15:39:10,713] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: [2023-04-27 15:39:10,714] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: [2023-04-27 15:39:10,714] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_14m400m100m/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. |
|
0: time (ms) | load-checkpoint: 1.15 |
|
0: estimated model parameters: 0.014147392 |
|
0: estimated model parameters without embeddings: 0.002420544 |
|
0: [after model, optimizer, and learning rate scheduler are built] datetime: 2023-04-27 15:39:11 |
|
0: > building train, validation, and test datasets ... |
|
0: > datasets target sizes (minimum size): |
|
0: train: 195313 |
|
0: validation: 256 |
|
0: test: 256 |
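These targets follow from the schedule: the train target is just --train-samples, while the validation target is eval_iters x global_batch_size per evaluation, times the number of evaluations that fit in 762 iterations at eval_interval 1000 (plus the final one). A sketch of that arithmetic, assuming Megatron's usual formula:

train_samples, train_iters = 195_313, 762
eval_iters, eval_interval, global_batch = 1, 1000, 256

valid = (train_iters // eval_interval + 1) * eval_iters * global_batch
test  = eval_iters * global_batch
print(train_samples, valid, test)  # 195313 256 256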
|
0: > building train, validation, and test datasets for GPT ... |
|
0: > building dataset index ... |
|
0: reading sizes... |
|
0: reading pointers... |
|
0: reading document index... |
|
0: creating numpy buffer of mmap... |
|
0: creating memory view of numpy buffer... |
|
0: > finished creating indexed dataset in 0.033320 seconds |
|
0: number of documents: 208931 |
|
0: > dataset split: |
|
0: train: |
|
0: document indices in [0, 208931) total of 208931 documents |
|
0: > WARNING: could not find index map files, building the indices on rank 0 ... |
|
0: > last epoch number of samples (94) is smaller than 95.0% of number of samples per epoch (48804), setting separate_last_epoch to True |
|
0: > elapsed time to build and save doc-idx mapping (seconds): 0.081743
|
0: using: |
|
0: number of documents: 208931 |
|
0: number of epochs: 5 |
|
0: sequence length: 2048 |
|
0: total number of samples: 244024 |
|
0: > elapsed time to build and save sample-idx mapping (seconds): 0.045406
|
0: > building shuffle index with split [0, 195219) and [195219, 244024) ... |
|
0: > elapsed time to build and save shuffle-idx mapping (seconds): 0.006500
|
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_195313ns_2048sl_1234s_doc_idx.npy |
|
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_195313ns_2048sl_1234s_sample_idx.npy |
|
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_subsampled/gpt2tok_c4_en_100M_text_document_train_indexmap_195313ns_2048sl_1234s_shuffle_idx.npy |
|
0: loaded indexed file in 0.002 seconds |
|
0: total number of samples: 244025 |
|
0: total number of epochs: 5 |
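The five epochs over the ~100M-token training split line up with the run name: at roughly 48,805 samples per epoch, four full passes cover only 195,219 samples, so a fifth (nearly unused) epoch is indexed to supply the last 94, which is also why separate_last_epoch was set above. A back-of-the-envelope sketch consistent with the logged index sizes (the exact Megatron bookkeeping works in tokens, so treat this as approximate):

total_samples, n_epochs, train_samples = 244_024, 5, 195_313
per_epoch = total_samples / n_epochs         # ~48_804.8 samples per epoch
split = int(4 * per_epoch)                   # 195_219, the shuffle-index split above
last = train_samples - split                 # 94 samples taken from epoch 5
print(split, last, last < 0.95 * per_epoch)  # 195219 94 True -> separate_last_epoch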
|
0: > building dataset index ... |
|
0: reading sizes... |
|
0: reading pointers... |
|
0: reading document index... |
|
0: creating numpy buffer of mmap... |
|
0: creating memory view of numpy buffer... |
|
0: > finished creating indexed dataset in 0.038422 seconds |
|
0: number of documents: 364608 |
|
0: > dataset split: |
|
0: validation: |
|
0: document indices in [0, 364608) total of 364608 documents |
|
0: > loading doc-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_doc_idx.npy |
|
0: > loading sample-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_sample_idx.npy |
|
0: > loading shuffle-idx mapping from /scratch/project_462000119/data/c4_validation/gpt2tok_c4validation_rerun_text_document_validation_indexmap_256ns_2048sl_1234s_shuffle_idx.npy |
|
0: loaded indexed file in 0.125 seconds |
|
0: total number of samples: 84978 |
|
0: total number of epochs: 1 |
|
0: > finished creating GPT datasets ... |
|
0: [after dataloaders are built] datetime: 2023-04-27 15:39:19 |
|
0: done with setup ... |
|
0: training ... |
|
0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings:

0: [000-000] 0.0141B / 0.0024B

0: time (ms) | model-and-optimizer-setup: 8297.11 | train/valid/test-data-iterators-setup: 8101.39
|
0: [before the start of training step] datetime: 2023-04-27 15:39:19 |
|
0: [2023-04-27 15:39:20,213] [INFO] [checkpointing.py:553:forward] Activation Checkpointing Information |
|
0: [2023-04-27 15:39:20,213] [INFO] [checkpointing.py:554:forward] ----Partition Activations False, CPU CHECKPOINTING False |
|
0: [2023-04-27 15:39:20,213] [INFO] [checkpointing.py:557:forward] ----contiguous Memory Checkpointing False with None total layers |
|
0: [2023-04-27 15:39:20,213] [INFO] [checkpointing.py:560:forward] ----Synchronization False |
|
0: [2023-04-27 15:39:20,213] [INFO] [checkpointing.py:561:forward] ----Profiling time in checkpointing False |
|
0: [Rank 0] (after 10 iterations) memory (MB) | allocated: 12710.28759765625 | max allocated: 31761.787109375 | reserved: 39838.0 | max reserved: 39838.0 |
|
0: iteration 10/ 762 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (s): 1.09 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 1.061609E+01 | grad norm: 1.244 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 235.427 | TFLOPs: 7.01 | |
|
0: iteration 20/ 762 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (s): 0.47 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 1.004091E+01 | grad norm: 1.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 543.302 | TFLOPs: 16.17 | |
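The reported TFLOPs track the standard Megatron model-FLOPs estimate, 96*B*s*l*h^2*(1 + s/6h + V/16lh) per iteration, with the factor 96 rather than 72 reflecting the extra forward pass from --checkpoint-activations, divided by iteration time and GPU count. A sketch that reproduces the steady-state figure from iteration 20 onward:

B, s, l, h, V = 256, 2048, 4, 224, 50304      # batch, seq, layers, hidden, padded vocab
n_gpus, sec_per_iter = 8, 256 / 543.302       # from 543.302 samples/s at iteration 20

flops = 96 * B * s * l * h**2 * (1 + s / (6 * h) + V / (16 * l * h))
print(flops / sec_per_iter / n_gpus / 1e12)   # ~16.17 TFLOPs/GPU, matching the log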
|
0: iteration 30/ 762 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (s): 0.47 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 9.494904E+00 | grad norm: 1.238 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.983 | TFLOPs: 16.16 | |
|
0: iteration 40/ 762 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (s): 0.47 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 8.996175E+00 | grad norm: 1.207 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.361 | TFLOPs: 16.11 | |
|
0: iteration 50/ 762 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (s): 0.47 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 8.595522E+00 | grad norm: 1.171 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.873 | TFLOPs: 16.15 | |
|
0: iteration 60/ 762 | consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (s): 0.47 | learning rate: 1.979E-04 | global batch size: 256 | lm loss: 8.256769E+00 | grad norm: 1.120 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.555 | TFLOPs: 16.14 | |
|
0: iteration 70/ 762 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (s): 0.47 | learning rate: 1.970E-04 | global batch size: 256 | lm loss: 7.974753E+00 | grad norm: 1.029 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.449 | TFLOPs: 16.14 | |
|
0: iteration 80/ 762 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (s): 0.47 | learning rate: 1.960E-04 | global batch size: 256 | lm loss: 7.762483E+00 | grad norm: 0.871 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.769 | TFLOPs: 16.15 | |
|
0: iteration 90/ 762 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (s): 0.47 | learning rate: 1.948E-04 | global batch size: 256 | lm loss: 7.601062E+00 | grad norm: 0.677 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.280 | TFLOPs: 16.14 | |
|
0: iteration 100/ 762 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (s): 0.47 | learning rate: 1.934E-04 | global batch size: 256 | lm loss: 7.462014E+00 | grad norm: 0.501 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 542.328 | TFLOPs: 16.14 | |
|
0: iteration 110/ 762 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (s): 0.47 | learning rate: 1.920E-04 | global batch size: 256 | lm loss: 7.354394E+00 | grad norm: 0.557 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.922 | TFLOPs: 16.13 | |
|
0: iteration 120/ 762 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (s): 0.47 | learning rate: 1.903E-04 | global batch size: 256 | lm loss: 7.264880E+00 | grad norm: 0.365 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.234 | TFLOPs: 16.11 | |
|
0: iteration 130/ 762 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (s): 0.47 | learning rate: 1.886E-04 | global batch size: 256 | lm loss: 7.188472E+00 | grad norm: 0.769 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.257 | TFLOPs: 16.11 | |
|
0: iteration 140/ 762 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (s): 0.47 | learning rate: 1.867E-04 | global batch size: 256 | lm loss: 7.122352E+00 | grad norm: 0.407 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 541.103 | TFLOPs: 16.10 | |
|
0: iteration 150/ 762 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (s): 0.47 | learning rate: 1.847E-04 | global batch size: 256 | lm loss: 7.034540E+00 | grad norm: 0.289 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.904 | TFLOPs: 16.10 | |
|
0: iteration 160/ 762 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (s): 0.47 | learning rate: 1.825E-04 | global batch size: 256 | lm loss: 6.989355E+00 | grad norm: 0.299 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.716 | TFLOPs: 16.09 | |
|
0: iteration 170/ 762 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (s): 0.47 | learning rate: 1.802E-04 | global batch size: 256 | lm loss: 6.930446E+00 | grad norm: 0.715 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.261 | TFLOPs: 16.08 | |
|
0: iteration 180/ 762 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (s): 0.47 | learning rate: 1.778E-04 | global batch size: 256 | lm loss: 6.885769E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.145 | TFLOPs: 16.07 | |
|
0: iteration 190/ 762 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (s): 0.47 | learning rate: 1.753E-04 | global batch size: 256 | lm loss: 6.842728E+00 | grad norm: 0.381 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 540.206 | TFLOPs: 16.07 | |
|
0: iteration 200/ 762 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 0.47 | learning rate: 1.727E-04 | global batch size: 256 | lm loss: 6.807191E+00 | grad norm: 0.362 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.872 | TFLOPs: 16.06 | |
|
0: iteration 210/ 762 | consumed samples: 53760 | consumed tokens: 110100480 | elapsed time per iteration (s): 0.47 | learning rate: 1.700E-04 | global batch size: 256 | lm loss: 6.770544E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.910 | TFLOPs: 16.07 | |
|
0: iteration 220/ 762 | consumed samples: 56320 | consumed tokens: 115343360 | elapsed time per iteration (s): 0.47 | learning rate: 1.671E-04 | global batch size: 256 | lm loss: 6.739288E+00 | grad norm: 0.334 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.810 | TFLOPs: 16.06 | |
|
0: iteration 230/ 762 | consumed samples: 58880 | consumed tokens: 120586240 | elapsed time per iteration (s): 0.47 | learning rate: 1.642E-04 | global batch size: 256 | lm loss: 6.705843E+00 | grad norm: 0.269 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.936 | TFLOPs: 16.07 | |
|
0: iteration 240/ 762 | consumed samples: 61440 | consumed tokens: 125829120 | elapsed time per iteration (s): 0.47 | learning rate: 1.611E-04 | global batch size: 256 | lm loss: 6.686831E+00 | grad norm: 0.320 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.539 | TFLOPs: 16.05 | |
|
0: iteration 250/ 762 | consumed samples: 64000 | consumed tokens: 131072000 | elapsed time per iteration (s): 0.47 | learning rate: 1.580E-04 | global batch size: 256 | lm loss: 6.666966E+00 | grad norm: 0.823 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.613 | TFLOPs: 16.06 | |
|
0: iteration 260/ 762 | consumed samples: 66560 | consumed tokens: 136314880 | elapsed time per iteration (s): 0.47 | learning rate: 1.548E-04 | global batch size: 256 | lm loss: 6.624731E+00 | grad norm: 0.583 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.488 | TFLOPs: 16.05 | |
|
0: iteration 270/ 762 | consumed samples: 69120 | consumed tokens: 141557760 | elapsed time per iteration (s): 0.47 | learning rate: 1.515E-04 | global batch size: 256 | lm loss: 6.607779E+00 | grad norm: 0.377 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.421 | TFLOPs: 16.05 | |
|
0: iteration 280/ 762 | consumed samples: 71680 | consumed tokens: 146800640 | elapsed time per iteration (s): 0.47 | learning rate: 1.482E-04 | global batch size: 256 | lm loss: 6.584142E+00 | grad norm: 0.255 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.390 | TFLOPs: 16.05 | |
|
0: iteration 290/ 762 | consumed samples: 74240 | consumed tokens: 152043520 | elapsed time per iteration (s): 0.47 | learning rate: 1.447E-04 | global batch size: 256 | lm loss: 6.567078E+00 | grad norm: 0.539 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.359 | TFLOPs: 16.05 | |
|
0: iteration 300/ 762 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 0.47 | learning rate: 1.413E-04 | global batch size: 256 | lm loss: 6.562552E+00 | grad norm: 0.358 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.196 | TFLOPs: 16.04 | |
|
0: iteration 310/ 762 | consumed samples: 79360 | consumed tokens: 162529280 | elapsed time per iteration (s): 0.48 | learning rate: 1.377E-04 | global batch size: 256 | lm loss: 6.553120E+00 | grad norm: 0.317 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.939 | TFLOPs: 16.04 | |
|
0: iteration 320/ 762 | consumed samples: 81920 | consumed tokens: 167772160 | elapsed time per iteration (s): 0.47 | learning rate: 1.341E-04 | global batch size: 256 | lm loss: 6.535150E+00 | grad norm: 0.206 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 539.030 | TFLOPs: 16.04 | |
|
0: iteration 330/ 762 | consumed samples: 84480 | consumed tokens: 173015040 | elapsed time per iteration (s): 0.48 | learning rate: 1.305E-04 | global batch size: 256 | lm loss: 6.514146E+00 | grad norm: 0.316 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.774 | TFLOPs: 16.03 | |
|
0: iteration 340/ 762 | consumed samples: 87040 | consumed tokens: 178257920 | elapsed time per iteration (s): 0.48 | learning rate: 1.269E-04 | global batch size: 256 | lm loss: 6.501083E+00 | grad norm: 0.379 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.669 | TFLOPs: 16.03 | |
|
0: iteration 350/ 762 | consumed samples: 89600 | consumed tokens: 183500800 | elapsed time per iteration (s): 0.48 | learning rate: 1.232E-04 | global batch size: 256 | lm loss: 6.489027E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.630 | TFLOPs: 16.03 | |
|
0: iteration 360/ 762 | consumed samples: 92160 | consumed tokens: 188743680 | elapsed time per iteration (s): 0.48 | learning rate: 1.194E-04 | global batch size: 256 | lm loss: 6.469370E+00 | grad norm: 0.372 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.671 | TFLOPs: 16.03 | |
|
0: iteration 370/ 762 | consumed samples: 94720 | consumed tokens: 193986560 | elapsed time per iteration (s): 0.48 | learning rate: 1.157E-04 | global batch size: 256 | lm loss: 6.475340E+00 | grad norm: 0.430 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.886 | TFLOPs: 16.04 | |
|
0: iteration 380/ 762 | consumed samples: 97280 | consumed tokens: 199229440 | elapsed time per iteration (s): 0.48 | learning rate: 1.120E-04 | global batch size: 256 | lm loss: 6.449596E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.791 | TFLOPs: 16.03 | |
|
0: iteration 390/ 762 | consumed samples: 99840 | consumed tokens: 204472320 | elapsed time per iteration (s): 0.48 | learning rate: 1.082E-04 | global batch size: 256 | lm loss: 6.435555E+00 | grad norm: 0.226 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.826 | TFLOPs: 16.03 | |
|
0: iteration 400/ 762 | consumed samples: 102400 | consumed tokens: 209715200 | elapsed time per iteration (s): 0.48 | learning rate: 1.045E-04 | global batch size: 256 | lm loss: 6.432314E+00 | grad norm: 0.329 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.433 | TFLOPs: 16.02 | |
|
0: iteration 410/ 762 | consumed samples: 104960 | consumed tokens: 214958080 | elapsed time per iteration (s): 0.48 | learning rate: 1.008E-04 | global batch size: 256 | lm loss: 6.421754E+00 | grad norm: 0.259 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.692 | TFLOPs: 16.03 | |
|
0: iteration 420/ 762 | consumed samples: 107520 | consumed tokens: 220200960 | elapsed time per iteration (s): 0.48 | learning rate: 9.705E-05 | global batch size: 256 | lm loss: 6.429256E+00 | grad norm: 0.380 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.636 | TFLOPs: 16.03 | |
|
0: iteration 430/ 762 | consumed samples: 110080 | consumed tokens: 225443840 | elapsed time per iteration (s): 0.48 | learning rate: 9.336E-05 | global batch size: 256 | lm loss: 6.417949E+00 | grad norm: 0.431 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.431 | TFLOPs: 16.02 | |
|
0: iteration 440/ 762 | consumed samples: 112640 | consumed tokens: 230686720 | elapsed time per iteration (s): 0.48 | learning rate: 8.969E-05 | global batch size: 256 | lm loss: 6.409863E+00 | grad norm: 0.213 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.549 | TFLOPs: 16.03 | |
|
0: iteration 450/ 762 | consumed samples: 115200 | consumed tokens: 235929600 | elapsed time per iteration (s): 0.48 | learning rate: 8.607E-05 | global batch size: 256 | lm loss: 6.397947E+00 | grad norm: 0.349 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.415 | TFLOPs: 16.02 | |
|
0: iteration 460/ 762 | consumed samples: 117760 | consumed tokens: 241172480 | elapsed time per iteration (s): 0.48 | learning rate: 8.248E-05 | global batch size: 256 | lm loss: 6.390238E+00 | grad norm: 0.224 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.530 | TFLOPs: 16.02 | |
|
0: iteration 470/ 762 | consumed samples: 120320 | consumed tokens: 246415360 | elapsed time per iteration (s): 0.48 | learning rate: 7.894E-05 | global batch size: 256 | lm loss: 6.382044E+00 | grad norm: 0.236 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.441 | TFLOPs: 16.02 | |
|
0: iteration 480/ 762 | consumed samples: 122880 | consumed tokens: 251658240 | elapsed time per iteration (s): 0.48 | learning rate: 7.545E-05 | global batch size: 256 | lm loss: 6.380611E+00 | grad norm: 0.229 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.596 | TFLOPs: 16.03 | |
|
0: iteration 490/ 762 | consumed samples: 125440 | consumed tokens: 256901120 | elapsed time per iteration (s): 0.48 | learning rate: 7.203E-05 | global batch size: 256 | lm loss: 6.374780E+00 | grad norm: 0.195 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.362 | TFLOPs: 16.02 | |
|
0: iteration 500/ 762 | consumed samples: 128000 | consumed tokens: 262144000 | elapsed time per iteration (s): 0.48 | learning rate: 6.867E-05 | global batch size: 256 | lm loss: 6.364036E+00 | grad norm: 0.226 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.366 | TFLOPs: 16.02 | |
|
0: iteration 510/ 762 | consumed samples: 130560 | consumed tokens: 267386880 | elapsed time per iteration (s): 0.48 | learning rate: 6.538E-05 | global batch size: 256 | lm loss: 6.350660E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.078 | TFLOPs: 16.01 | |
|
0: iteration 520/ 762 | consumed samples: 133120 | consumed tokens: 272629760 | elapsed time per iteration (s): 0.48 | learning rate: 6.217E-05 | global batch size: 256 | lm loss: 6.343756E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.315 | TFLOPs: 16.02 | |
|
0: iteration 530/ 762 | consumed samples: 135680 | consumed tokens: 277872640 | elapsed time per iteration (s): 0.48 | learning rate: 5.904E-05 | global batch size: 256 | lm loss: 6.350199E+00 | grad norm: 0.240 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.223 | TFLOPs: 16.02 | |
|
0: iteration 540/ 762 | consumed samples: 138240 | consumed tokens: 283115520 | elapsed time per iteration (s): 0.48 | learning rate: 5.600E-05 | global batch size: 256 | lm loss: 6.352103E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.137 | TFLOPs: 16.01 | |
|
0: iteration 550/ 762 | consumed samples: 140800 | consumed tokens: 288358400 | elapsed time per iteration (s): 0.48 | learning rate: 5.305E-05 | global batch size: 256 | lm loss: 6.339139E+00 | grad norm: 0.197 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.368 | TFLOPs: 16.02 | |
|
0: iteration 560/ 762 | consumed samples: 143360 | consumed tokens: 293601280 | elapsed time per iteration (s): 0.48 | learning rate: 5.020E-05 | global batch size: 256 | lm loss: 6.343401E+00 | grad norm: 0.232 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.296 | TFLOPs: 16.02 | |
|
0: iteration 570/ 762 | consumed samples: 145920 | consumed tokens: 298844160 | elapsed time per iteration (s): 0.48 | learning rate: 4.746E-05 | global batch size: 256 | lm loss: 6.333387E+00 | grad norm: 0.186 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.089 | TFLOPs: 16.01 | |
|
0: iteration 580/ 762 | consumed samples: 148480 | consumed tokens: 304087040 | elapsed time per iteration (s): 0.48 | learning rate: 4.482E-05 | global batch size: 256 | lm loss: 6.321525E+00 | grad norm: 0.189 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.255 | TFLOPs: 16.02 | |
|
0: iteration 590/ 762 | consumed samples: 151040 | consumed tokens: 309329920 | elapsed time per iteration (s): 0.48 | learning rate: 4.230E-05 | global batch size: 256 | lm loss: 6.331749E+00 | grad norm: 0.209 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.297 | TFLOPs: 16.02 | |
|
0: iteration 600/ 762 | consumed samples: 153600 | consumed tokens: 314572800 | elapsed time per iteration (s): 0.48 | learning rate: 3.989E-05 | global batch size: 256 | lm loss: 6.320105E+00 | grad norm: 0.295 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.059 | TFLOPs: 16.01 | |
|
0: iteration 610/ 762 | consumed samples: 156160 | consumed tokens: 319815680 | elapsed time per iteration (s): 0.48 | learning rate: 3.760E-05 | global batch size: 256 | lm loss: 6.336096E+00 | grad norm: 0.176 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.082 | TFLOPs: 16.01 | |
|
0: iteration 620/ 762 | consumed samples: 158720 | consumed tokens: 325058560 | elapsed time per iteration (s): 0.48 | learning rate: 3.544E-05 | global batch size: 256 | lm loss: 6.320390E+00 | grad norm: 0.174 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.062 | TFLOPs: 16.01 | |
|
0: iteration 630/ 762 | consumed samples: 161280 | consumed tokens: 330301440 | elapsed time per iteration (s): 0.48 | learning rate: 3.341E-05 | global batch size: 256 | lm loss: 6.331472E+00 | grad norm: 0.225 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.228 | TFLOPs: 16.02 | |
|
0: iteration 640/ 762 | consumed samples: 163840 | consumed tokens: 335544320 | elapsed time per iteration (s): 0.48 | learning rate: 3.151E-05 | global batch size: 256 | lm loss: 6.302699E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.062 | TFLOPs: 16.01 | |
|
0: iteration 650/ 762 | consumed samples: 166400 | consumed tokens: 340787200 | elapsed time per iteration (s): 0.48 | learning rate: 2.975E-05 | global batch size: 256 | lm loss: 6.316518E+00 | grad norm: 0.215 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.175 | TFLOPs: 16.01 | |
|
0: iteration 660/ 762 | consumed samples: 168960 | consumed tokens: 346030080 | elapsed time per iteration (s): 0.48 | learning rate: 2.812E-05 | global batch size: 256 | lm loss: 6.302271E+00 | grad norm: 0.166 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.983 | TFLOPs: 16.01 | |
|
0: iteration 670/ 762 | consumed samples: 171520 | consumed tokens: 351272960 | elapsed time per iteration (s): 0.48 | learning rate: 2.664E-05 | global batch size: 256 | lm loss: 6.307047E+00 | grad norm: 0.173 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.972 | TFLOPs: 16.01 | |
|
0: iteration 680/ 762 | consumed samples: 174080 | consumed tokens: 356515840 | elapsed time per iteration (s): 0.48 | learning rate: 2.530E-05 | global batch size: 256 | lm loss: 6.308545E+00 | grad norm: 0.185 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.056 | TFLOPs: 16.01 | |
|
0: iteration 690/ 762 | consumed samples: 176640 | consumed tokens: 361758720 | elapsed time per iteration (s): 0.48 | learning rate: 2.411E-05 | global batch size: 256 | lm loss: 6.287326E+00 | grad norm: 0.187 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.159 | TFLOPs: 16.01 | |
|
0: iteration 700/ 762 | consumed samples: 179200 | consumed tokens: 367001600 | elapsed time per iteration (s): 0.48 | learning rate: 2.307E-05 | global batch size: 256 | lm loss: 6.303761E+00 | grad norm: 0.205 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.083 | TFLOPs: 16.01 | |
|
0: iteration 710/ 762 | consumed samples: 181760 | consumed tokens: 372244480 | elapsed time per iteration (s): 0.48 | learning rate: 2.217E-05 | global batch size: 256 | lm loss: 6.306910E+00 | grad norm: 0.159 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.039 | TFLOPs: 16.01 | |
|
0: iteration 720/ 762 | consumed samples: 184320 | consumed tokens: 377487360 | elapsed time per iteration (s): 0.48 | learning rate: 2.143E-05 | global batch size: 256 | lm loss: 6.284398E+00 | grad norm: 0.187 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.915 | TFLOPs: 16.01 | |
|
0: iteration 730/ 762 | consumed samples: 186880 | consumed tokens: 382730240 | elapsed time per iteration (s): 0.48 | learning rate: 2.084E-05 | global batch size: 256 | lm loss: 6.303773E+00 | grad norm: 0.209 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 538.086 | TFLOPs: 16.01 | |
|
0: iteration 740/ 762 | consumed samples: 189440 | consumed tokens: 387973120 | elapsed time per iteration (s): 0.48 | learning rate: 2.041E-05 | global batch size: 256 | lm loss: 6.302274E+00 | grad norm: 0.254 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.945 | TFLOPs: 16.01 | |
|
0: iteration 750/ 762 | consumed samples: 192000 | consumed tokens: 393216000 | elapsed time per iteration (s): 0.48 | learning rate: 2.013E-05 | global batch size: 256 | lm loss: 6.297002E+00 | grad norm: 0.174 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.779 | TFLOPs: 16.00 | |
|
0: iteration 760/ 762 | consumed samples: 194560 | consumed tokens: 398458880 | elapsed time per iteration (s): 0.48 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 6.282113E+00 | grad norm: 0.193 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 537.976 | TFLOPs: 16.01 | |
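
The per-iteration bookkeeping above is self-consistent and can be reproduced from first principles. Below is a minimal sketch, assuming the sample-based cosine schedule this run was configured with (peak lr 2e-4, min lr 2e-5, 1,953 warmup samples, 195,313 decay samples, sequence length 2,048; the exact Megatron decay code may differ in detail), that checks the iteration-110 line:

import math

SEQ_LEN        = 2048      # tokens per sample
GLOBAL_BATCH   = 256       # samples per iteration
LR_MAX, LR_MIN = 2e-4, 2e-5
WARMUP_SAMPLES = 1_953
DECAY_SAMPLES  = 195_313

def cosine_lr(consumed_samples):
    # Linear sample-based warmup, then cosine decay to the floor LR (a sketch of the schedule).
    if consumed_samples < WARMUP_SAMPLES:
        return LR_MAX * consumed_samples / WARMUP_SAMPLES
    ratio = (consumed_samples - WARMUP_SAMPLES) / (DECAY_SAMPLES - WARMUP_SAMPLES)
    coeff = 0.5 * (1.0 + math.cos(math.pi * min(ratio, 1.0)))
    return LR_MIN + coeff * (LR_MAX - LR_MIN)

samples = 110 * GLOBAL_BATCH            # 28160 consumed samples at iteration 110
print(samples * SEQ_LEN)                # -> 57671680, the logged consumed-tokens figure
print(f"{cosine_lr(samples):.3E}")      # -> 1.920E-04, the logged learning rate
print(f"{GLOBAL_BATCH / 0.4724:.1f}")   # -> 541.9; the logged 541.922 samples/s implies
                                        #    an unrounded iteration time of ~0.4724 s

The same formula gives 2.001E-05 at iteration 760 (194,560 samples), so the schedule ran essentially to its 2e-5 floor. The 762-iteration total is simply the 195,313-sample budget divided by the global batch of 256, rounded down; at 762 x 256 x 2048 = 399,507,456 tokens, this also appears to match the "400m" in the run name 14m400m100m.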
|
0: [after training is done] datetime: 2023-04-27 15:45:27 |
|
0: saving checkpoint at iteration 762 to checkpoints_14m400m100m |
|
0: ----------------------------------------------------------------------------------------------------------------- |
|
0: validation loss at the end of training for val data | lm loss value: 6.262801E+00 | lm loss PPL: 5.246866E+02 | |
|
0: ----------------------------------------------------------------------------------------------------------------- |
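
The reported perplexity is just the exponential of the language-model loss, which a one-liner confirms:

import math
print(f"{math.exp(6.262801):.6E}")   # -> 5.246866E+02, the logged lm loss PPL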
|
0: [2023-04-27 15:45:27,500] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step762 is begin to save! |
|
0: [2023-04-27 15:45:27,620] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/layer_01-model_00-model_states.pt... |
|
0: [2023-04-27 15:45:27,665] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/layer_01-model_00-model_states.pt. |
|
0: [2023-04-27 15:45:27,665] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/layer_03-model_00-model_states.pt... |
|
0: [2023-04-27 15:45:27,668] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/layer_03-model_00-model_states.pt. |
|
0: [2023-04-27 15:45:27,668] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/layer_04-model_00-model_states.pt... |
|
0: [2023-04-27 15:45:27,672] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/layer_04-model_00-model_states.pt. |
|
0: [2023-04-27 15:45:27,672] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/layer_05-model_00-model_states.pt... |
|
0: [2023-04-27 15:45:27,675] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/layer_05-model_00-model_states.pt. |
|
0: [2023-04-27 15:45:27,675] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/layer_06-model_00-model_states.pt... |
|
0: [2023-04-27 15:45:27,678] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/layer_06-model_00-model_states.pt. |
|
0: [2023-04-27 15:45:27,678] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/layer_08-model_00-model_states.pt... |
|
0: [2023-04-27 15:45:27,679] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/layer_08-model_00-model_states.pt. |
|
0: [2023-04-27 15:45:27,679] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_14m400m100m/global_step762/mp_rank_00_model_states.pt |
|
0: [2023-04-27 15:45:27,679] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/mp_rank_00_model_states.pt... |
|
0: [2023-04-27 15:45:27,681] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/mp_rank_00_model_states.pt. |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,685] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... |
|
0: [2023-04-27 15:45:27,711] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,711] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,711] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: [2023-04-27 15:45:27,714] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,714] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,714] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: [2023-04-27 15:45:27,714] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,714] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,714] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,714] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: [2023-04-27 15:45:27,714] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,714] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: [2023-04-27 15:45:27,731] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,731] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,731] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,731] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: [2023-04-27 15:45:27,731] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,731] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. |
|
0: [2023-04-27 15:45:27,731] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,731] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,731] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: [2023-04-27 15:45:27,732] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: [2023-04-27 15:45:27,760] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_14m400m100m/global_step762/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt |
|
0: [2023-04-27 15:45:27,760] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step762 is ready now! |
|
0: successfully saved checkpoint at iteration 762 to checkpoints_14m400m100m |
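
The files written above follow Megatron-DeepSpeed's partitioned layout: layer-wise model states (the indices 01, 03-06, 08 are pipeline-engine slots, plausibly the embedding, the four transformer layers, and the final layernorm), one mp_rank_00 model-state file, and one bf16_zero optimizer shard per data-parallel rank, eight in total, since under --bf16 the fp32 optimizer state is sharded across data-parallel ranks even at ZeRO stage 0. A hypothetical sanity-check helper (not part of the training scripts) for such a directory might look like:

import glob
import os

def check_checkpoint(root="checkpoints_14m400m100m", step=762, dp_size=8):
    # Count the three kinds of files this save is expected to produce (a sketch).
    d = os.path.join(root, f"global_step{step}")
    layers = glob.glob(os.path.join(d, "layer_*-model_*-model_states.pt"))
    model  = glob.glob(os.path.join(d, "mp_rank_*_model_states.pt"))
    optim  = glob.glob(os.path.join(d, "bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt"))
    assert len(optim) == dp_size, "expected one optimizer shard per data-parallel rank"
    print(f"{len(layers)} layer files, {len(model)} model-state files, "
          f"{len(optim)} optimizer shards")

check_checkpoint()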
|
END 3423734: Thu 27 Apr 2023 03:45:34 PM EEST |
|
|