2024-03-06 12:17:28.140 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_12-17-28_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
[12:17:28.150: ranks 0 and 1 each logged a train_args record identical to the one above (only local_rank differs); duplicates omitted.]
2024-03-06 12:17:28.199 | INFO | __main__:init_components:333 - Initializing components...
[12:17:28.199: the "Initializing components..." record was repeated by the other two ranks; duplicates omitted.]
[12:25:27.169-181 (second launch): ranks 0, 2, 1, and 3 each logged a train_args:TrainingArguments(...) record identical to the 12:17:28 dump except local_rank and logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_12-25-27_u; duplicates omitted.]
2024-03-06 12:25:27.170 | INFO | __main__:init_components:333 - Initializing components...
[12:25:27.183: the "Initializing components..." record was repeated by the other three ranks; duplicates omitted.]
2024-03-06 12:32:56.084 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_12-32-56_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-06 12:32:56.086 | INFO | __main__:init_components:333 - Initializing components... 
2024-03-06 12:32:56.087 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_12-32-56_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-06 12:32:56.088 | INFO | __main__:init_components:333 - Initializing components... 
2024-03-06 12:32:56.096 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_12-32-56_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 12:32:56.098 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 12:32:56.098 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 12:45:07.048 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False,
local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_12-45-07_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 12:45:07.050 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 12:45:07.057 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 12:45:07.057 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 12:45:07.058 | INFO | __main__:init_components:333 - Initializing components...
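Each TrainingArguments dump in this log is repeated once per GPU worker (the copies differ only in local_rank and timestamp), because every rank runs setup_everything and logs unconditionally. A minimal sketch of a rank gate that would keep one copy per run, assuming torchrun-style LOCAL_RANK environment variables; the helper names here are illustrative, not taken from the training script:

```python
import os


def is_main_process() -> bool:
    # Under torchrun/DDP each worker sees its own LOCAL_RANK; treat rank 0
    # (or a non-distributed run, where the variable is absent) as main.
    return int(os.environ.get("LOCAL_RANK", "0")) == 0


def log_once(emit, message: str) -> bool:
    # Emit the message only on the main process. `emit` is any callable
    # such as logger.info; returns True if the message was actually logged.
    if is_main_process():
        emit(message)
        return True
    return False
```

With this gate, `log_once(logger.info, f"train_args:{train_args}")` would produce a single dump per run instead of one per rank.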
2024-03-06 13:10:28.241 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-10-28_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 13:10:28.242 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:10:28.245 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:10:28.247 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:10:28.249 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:12:34.796 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-12-34_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-06 13:12:34.798 | INFO | __main__:init_components:333 - Initializing components... 
2024-03-06 13:12:34.809 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-12-34_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 13:12:34.811 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:12:34.811 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:12:34.813 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:17:44.571 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-17-44_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 13:17:44.573 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:17:44.581 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:17:44.582 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:17:44.583 | INFO | __main__:init_components:333 - Initializing components...
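Because the same `train_args:TrainingArguments(...)` dump is re-emitted on every restart, the easiest way to spot what changed between runs is to parse the dump back into key-value pairs and diff them. A minimal sketch (the regex assumes the flat `key=value,` layout shown above and deliberately skips nested fields such as `accelerator_config`):

```python
import re

def parse_train_args(dump: str) -> dict:
    """Parse a flat `train_args:TrainingArguments(key=value, ...)` log dump
    into a dict of strings so two runs can be compared field by field.
    Sketch only: nested dict values containing commas are not handled."""
    body = dump.split("TrainingArguments(", 1)[1]
    pairs = re.findall(r"(\w+)=([^,]*),", body)
    return dict(pairs)

# Shortened sample in the same format as the dumps above.
sample = "train_args:TrainingArguments( _n_gpu=1, learning_rate=0.0001, local_rank=0, warmup_steps=100, )"
args = parse_train_args(sample)
```

Diffing two parsed dumps (`dict(a.items() - b.items())`) then shows only the fields that differ, e.g. `local_rank` and `logging_dir` between the per-rank copies.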
2024-03-06 13:19:29.951 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-19-29_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, 
max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 13:19:29.953 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:19:29.962 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:19:29.963 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:19:29.967 | INFO | __main__:init_components:333 - Initializing components...
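A quick sanity check on the throughput-relevant fields in the dump above: `per_device_train_batch_size=1` with `gradient_accumulation_steps=16` across four ranks (the world size of 4 is an inference from the `local_rank` values 0-3 in this log, it is not logged directly) gives an effective global batch of 64, and `lr_scheduler_type=constant_with_warmup` ramps the learning rate linearly to `learning_rate=0.0001` over the first `warmup_steps=100` optimizer steps, then holds it constant:

```python
# Effective global batch size implied by the logged arguments.
# world_size=4 is an assumption inferred from local_rank values 0-3.
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
world_size = 4
effective_batch = per_device_train_batch_size * gradient_accumulation_steps * world_size

def lr_at(step: int, base_lr: float = 1e-4, warmup_steps: int = 100) -> float:
    """constant_with_warmup schedule: linear ramp from 0 over `warmup_steps`
    optimizer steps, then constant at base_lr."""
    if step < warmup_steps:
        return base_lr * (step / warmup_steps)
    return base_lr
```

Note that steps here are optimizer steps, so with accumulation the 100-step warmup spans 100 × 16 micro-batches per rank.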
2024-03-06 13:19:30.387 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696 2024-03-06 13:19:30.388 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat 2024-03-06 13:19:30.388 | INFO | __main__:load_model:221 - Train model with qlora 2024-03-06 13:23:03.589 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, 
ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-23-03_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, 
save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-06 13:23:03.641 | INFO | __main__:init_components:333 - Initializing components... 
2024-03-06 13:23:04.430 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696 2024-03-06 13:23:04.430 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat 2024-03-06 13:23:04.431 | INFO | __main__:load_model:221 - Train model with qlora 2024-03-06 13:31:12.733 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, 
ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-31-12_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, 
save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-06 13:31:12.768 | INFO | __main__:init_components:333 - Initializing components... 
2024-03-06 13:31:13.284 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696 2024-03-06 13:31:13.284 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat 2024-03-06 13:31:13.284 | INFO | __main__:load_model:221 - Train model with qlora 2024-03-06 13:50:28.919 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, 
..., local_rank=0, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_13-50-28_u, ... )
2024-03-06 13:50:28.944 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 13:50:29.442 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 13:50:29.442 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 13:50:29.442 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 13:52:59.674 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['up_proj', 'W_pack', 'o_proj', 'gate_proj', 'down_proj']
2024-03-06 13:55:12.061 | INFO | __main__:load_model:283 - memory footprint of model: 11.499347686767578 GB
2024-03-06 13:55:12.072 | INFO | __main__:load_model:295 - Total model params: 7815.26M
2024-03-06 13:55:12.073 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 13:55:12.073 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 13:55:12.073 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 13:55:12.170 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 13:55:12.170 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 15:36:52.894 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, ..., local_rank=0, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_15-36-52_u, ... )
2024-03-06 15:36:52.902 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 15:36:53.359 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 15:36:53.360 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 15:36:53.360 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 15:39:10.814 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['up_proj', 'o_proj', 'W_pack', 'gate_proj', 'down_proj']
2024-03-06 15:41:23.526 | INFO | __main__:load_model:283 - memory footprint of model: 11.499347686767578 GB
2024-03-06 15:41:23.538 | INFO | __main__:load_model:295 - Total model params: 7815.26M
2024-03-06 15:41:23.538 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 15:41:23.538 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 15:41:23.538 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 15:41:23.635 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 15:41:23.635 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 16:02:53.895 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, ..., local_rank=0, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-02-53_u, ... )
2024-03-06 16:02:53.910 | INFO | __main__:init_components:333 - Initializing components...
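The "LoRA target module names" records above come from a helper that walks the quantized model, keeps the leaf name of every 4-bit Linear layer, and drops `lm_head`; the order differs from run to run because the names are accumulated in a set. A minimal sketch of that collection logic, using a hypothetical list of dotted module paths in place of iterating `model.named_modules()` (sorted here only to make the sketch deterministic; the real helper returns the set unsorted, hence the shuffled order in the log):

```python
def find_all_linear_names(named_linear_modules, exclude=("lm_head",)):
    """Collect leaf names of linear layers to use as LoRA target modules.

    `named_linear_modules` is an iterable of dotted module paths, a
    stand-in for model.named_modules() filtered to 4-bit Linear layers.
    """
    targets = set()
    for name in named_linear_modules:
        leaf = name.split(".")[-1]  # e.g. "model.layers.0.mlp.up_proj" -> "up_proj"
        if leaf not in exclude:
            targets.add(leaf)
    return sorted(targets)

# Dotted paths as they appear in Baichuan2 (W_pack fuses Q/K/V):
paths = [
    "model.layers.0.self_attn.W_pack",
    "model.layers.0.self_attn.o_proj",
    "model.layers.0.mlp.gate_proj",
    "model.layers.0.mlp.up_proj",
    "model.layers.0.mlp.down_proj",
    "lm_head",
]
print(find_all_linear_names(paths))
# -> ['W_pack', 'down_proj', 'gate_proj', 'o_proj', 'up_proj']
```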
2024-03-06 16:02:54.355 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:02:54.355 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:02:54.356 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:05:08.407 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['up_proj', 'down_proj', 'gate_proj', 'W_pack', 'o_proj']
2024-03-06 16:07:21.360 | INFO | __main__:load_model:283 - memory footprint of model: 11.499347686767578 GB
2024-03-06 16:07:21.372 | INFO | __main__:load_model:295 - Total model params: 7815.26M
2024-03-06 16:07:21.372 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 16:07:21.372 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 16:07:21.372 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 16:07:21.448 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 16:07:21.449 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 16:07:21.667 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 16:15:19.800 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, ..., local_rank=0, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-15-19_u, ... )
2024-03-06 16:15:19.810 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:15:20.359 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:15:20.360 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:15:20.360 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:17:39.499 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['down_proj', 'gate_proj', 'W_pack', 'up_proj', 'o_proj']
2024-03-06 16:19:51.334 | INFO | __main__:load_model:283 - memory footprint of model: 11.499347686767578 GB
2024-03-06 16:19:51.345 | INFO | __main__:load_model:295 - Total model params: 7815.26M
2024-03-06 16:19:51.345 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 16:19:51.345 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 16:19:51.345 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 16:19:51.520 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 16:19:51.521 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 16:19:51.693 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 16:27:57.389 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, ..., local_rank=0, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-27-57_u, ... )
2024-03-06 16:27:57.395 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:27:57.855 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:27:57.855 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:27:57.855 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:30:23.733 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['W_pack', 'up_proj', 'o_proj', 'gate_proj', 'down_proj']
2024-03-06 16:32:35.436 | INFO | __main__:load_model:283 - memory footprint of model: 11.499347686767578 GB
2024-03-06 16:32:35.447 | INFO | __main__:load_model:295 - Total model params: 7815.26M
2024-03-06 16:32:35.448 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 16:32:35.448 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 16:32:35.448 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 16:32:35.524 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 16:32:35.524 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 16:32:35.559 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 16:34:24.864 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, ..., local_rank=1, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-34-24_u, logging_first_step=False,
... )
2024-03-06 16:34:24.867 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, ..., local_rank=2, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-34-24_u, ... )
2024-03-06 16:34:24.869 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, ..., local_rank=0, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-34-24_u, ... )
2024-03-06 16:34:24.870 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, ..., local_rank=3, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-34-24_u, ... )
2024-03-06 16:34:24.876 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:34:24.876 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:34:24.876 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:34:24.876 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:34:25.444 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:34:25.445 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:34:25.445 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:34:25.445 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:34:25.446 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:34:25.446 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:34:25.446 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:34:25.446 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:34:25.446 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:34:25.447 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:34:25.448 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:34:25.448 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:37:25.850 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, ..., local_rank=0, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-37-25_u,
logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-06 16:37:25.890 | INFO | __main__:init_components:333 - Initializing components... 
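As a back-of-the-envelope check on the arguments logged above, the effective global batch size follows from per_device_train_batch_size=1, gradient_accumulation_steps=16, and the logged _n_gpu=4 (a stdlib-only sketch; the function name is illustrative and not part of the training script):

```python
# Effective global batch size implied by the logged TrainingArguments.
# Values come from the log above; the helper name is illustrative.
def effective_batch_size(per_device: int, accum_steps: int, gpus: int) -> int:
    """Samples contributing to one optimizer step across all devices."""
    return per_device * accum_steps * gpus

# per_device_train_batch_size=1, gradient_accumulation_steps=16, _n_gpu=4
print(effective_batch_size(1, 16, 4))  # 64
```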
2024-03-06 16:37:26.425 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:37:26.426 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:37:26.426 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:40:56.142 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['up_proj', 'gate_proj', 'o_proj', 'down_proj', 'W_pack']
2024-03-06 16:43:05.675 | INFO | __main__:load_model:283 - memory footprint of model: 10.875873565673828 GB
2024-03-06 16:43:05.686 | INFO | __main__:load_model:295 - Total model params: 7647.89M
2024-03-06 16:43:05.687 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 16:43:05.687 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 16:43:05.687 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 16:43:05.879 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 16:43:05.879 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 16:43:05.938 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 16:46:23.963 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False,
do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-46-23_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, 
skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 16:46:23.981 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:46:24.473 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:46:24.473 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:46:24.473 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:49:53.376 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['up_proj', 'gate_proj', 'down_proj', 'W_pack', 'o_proj']
2024-03-06 16:52:04.003 | INFO | __main__:load_model:283 - memory footprint of model: 10.771961212158203 GB
2024-03-06 16:52:04.028 | INFO | __main__:load_model:295 - Total model params: 7620.00M
2024-03-06 16:52:04.029 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 16:52:04.029 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 16:52:04.030 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 16:52:04.187 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 16:52:04.188 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 16:52:05.224 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 16:55:14.931 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08,
auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_16-55-14_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, 
per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 16:55:14.948 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 16:55:15.475 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 16:55:15.476 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 16:55:15.476 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 16:57:51.689 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'gate_proj', 'W_pack', 'down_proj', 'up_proj']
2024-03-06 17:00:00.799 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-06 17:00:00.848 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-06 17:00:00.848 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 17:00:00.848 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 17:00:00.849 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 17:00:01.112 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 17:00:01.112 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 17:00:02.163 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 17:03:22.743 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_17-03-22_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 17:03:22.757 | INFO | __main__:init_components:333 - Initializing components...
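Across the restarted runs above, find_all_linear_names logs the same five module names in a different order each time, which is consistent with the names being collected in a Python set (whose string iteration order varies between processes under hash randomization). A minimal stdlib-only sketch of that pattern, with hypothetical (name, kind) pairs standing in for the model's real named_modules() output:

```python
# Sketch of set-based name collection: same five names, order varies per run.
# The (name, kind) pairs are illustrative; the real function walks the
# model's module tree and keeps only the quantized Linear layers.
def find_all_linear_names(named_modules):
    lora_module_names = set()
    for name, kind in named_modules:
        if kind == "linear":
            # Keep only the last path component,
            # e.g. "model.layers.0.mlp.up_proj" -> "up_proj".
            lora_module_names.add(name.split(".")[-1])
    lora_module_names.discard("lm_head")  # output head is usually excluded
    return list(lora_module_names)

modules = [
    ("model.layers.0.self_attn.W_pack", "linear"),
    ("model.layers.0.self_attn.o_proj", "linear"),
    ("model.layers.0.mlp.gate_proj", "linear"),
    ("model.layers.0.mlp.up_proj", "linear"),
    ("model.layers.0.mlp.down_proj", "linear"),
    ("model.layers.0.input_layernorm", "norm"),
    ("lm_head", "linear"),
]
print(sorted(find_all_linear_names(modules)))
# ['W_pack', 'down_proj', 'gate_proj', 'o_proj', 'up_proj']
```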
2024-03-06 17:03:23.280 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 17:03:23.281 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 17:03:23.281 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 17:05:59.052 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['gate_proj', 'down_proj', 'W_pack', 'o_proj', 'up_proj']
2024-03-06 17:08:08.852 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-06 17:08:08.864 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-06 17:08:08.864 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 17:08:08.864 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 17:08:08.864 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 17:08:08.983 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 17:08:08.983 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 17:08:09.858 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 17:13:29.709 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False,
do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_17-13-29_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=1, seed=42, 
skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 17:13:29.754 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 17:13:30.211 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 17:13:30.212 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 17:13:30.212 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 17:16:25.025 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['W_pack', 'up_proj', 'gate_proj', 'o_proj', 'down_proj']
2024-03-06 17:18:34.386 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-06 17:18:34.412 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-06 17:18:34.413 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 17:18:34.413 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 17:18:34.413 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 17:18:34.583 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 17:18:34.584 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 17:18:35.600 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 17:30:51.904 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08,
auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_17-30-51_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, 
per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 17:30:51.906 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 17:30:52.389 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 17:30:52.390 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 17:30:52.390 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 17:33:41.988 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['gate_proj', 'W_pack', 'down_proj', 'o_proj', 'up_proj']
2024-03-06 17:35:50.901 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-06 17:35:50.928 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-06 17:35:50.928 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 17:35:50.928 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 17:35:50.929 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 17:35:51.127 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 17:35:51.128 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 17:35:51.999 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 17:41:19.871 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_17-41-19_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 17:42:56.902 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False,
eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_17-42-56_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, 
tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 17:42:56.954 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 17:42:57.467 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 17:42:57.467 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 17:42:57.468 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 17:46:11.699 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['down_proj', 'W_pack', 'gate_proj', 'up_proj', 'o_proj']
2024-03-06 17:48:21.583 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-06 17:48:21.611 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-06 17:48:21.612 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 17:48:21.612 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 17:48:21.612 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 17:48:21.844 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 17:48:21.845 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 17:48:22.555 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 17:52:18.060 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=True, bf16_full_eval=False,
data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_17-52-18_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, 
push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 17:52:18.061 | INFO | __main__:init_components:333 - Initializing components...
2024-03-06 17:52:18.503 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 17:52:18.503 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 17:52:18.504 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 17:54:55.504 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['down_proj', 'up_proj', 'gate_proj', 'o_proj', 'W_pack']
2024-03-06 17:57:03.863 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-06 17:57:03.875 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-06 17:57:03.875 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 17:57:03.875 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 17:57:03.876 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 17:57:03.989 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 17:57:03.989 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 17:57:04.145 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 18:08:25.434 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_18-08-25_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, 
lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 18:08:25.435 | INFO | __main__:init_components:333 - Initializing components...
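The dump above fixes per_device_train_batch_size=1 and gradient_accumulation_steps=16, and this run reports _n_gpu=4; the dataset loader later reports 7720 examples. A quick sanity check of the schedule these values imply; this is a sketch, not part of the training code, and a world size of 4 is an assumption read off the _n_gpu field:

```python
# Values copied from the TrainingArguments dump and dataset log in this file.
# world_size=4 is an assumption based on _n_gpu=4 in this particular dump.
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
world_size = 4
dataset_size = 7720        # "There are 7720 data in dataset"
num_train_epochs = 1

effective_batch = per_device_train_batch_size * gradient_accumulation_steps * world_size
steps_per_epoch = -(-dataset_size // effective_batch)   # ceiling division
total_steps = steps_per_epoch * num_train_epochs

print(effective_batch)  # 64
print(total_steps)      # 121
```

Note the interaction with the other arguments: with roughly 121 optimizer steps in one epoch, save_steps=500 never fires during training (the script presumably saves explicitly at the end), and warmup_steps=100 under constant_with_warmup means most of the run is still warming up.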
2024-03-06 18:08:25.871 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 18:08:25.871 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 18:08:25.871 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-06 18:11:10.721 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'down_proj', 'up_proj', 'W_pack', 'gate_proj']
2024-03-06 18:13:19.934 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-06 18:13:19.963 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-06 18:13:19.964 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-06 18:13:19.964 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-06 18:13:19.964 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-06 18:13:20.065 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-06 18:13:20.066 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-06 18:13:20.816 | INFO | __main__:main:387 - *** starting training ***
2024-03-06 18:21:14.358 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, 
do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar06_18-21-14_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, 
skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-06 18:21:14.393 | INFO | __main__:init_components:333 - Initializing components...
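The runs above log the LoRA target modules in a different order each time (['down_proj', 'W_pack', ...], ['down_proj', 'up_proj', ...], ['o_proj', 'down_proj', ...]), which is what you would expect if find_all_linear_names collects names into an unordered set. Below is a minimal pure-Python stand-in for that pattern; the real function presumably walks model.named_modules() and tests each module against the bitsandbytes 4-bit linear class, so the (name, is_linear) pairs here are a hypothetical simplification:

```python
# Hypothetical stand-in for find_all_linear_names: given (name, is_linear)
# pairs such as model.named_modules() would yield, collect the leaf names of
# all linear layers, excluding the output head (lm_head).
def find_all_linear_names(named_modules):
    lora_module_names = set()
    for name, is_linear in named_modules:
        if not is_linear:
            continue
        leaf = name.split(".")[-1]   # "model.layers.0.mlp.down_proj" -> "down_proj"
        lora_module_names.add(leaf)
    lora_module_names.discard("lm_head")  # never adapt the LM head
    return list(lora_module_names)        # set order varies run to run

# Toy module listing mimicking Baichuan2's layer names.
modules = [
    ("model.layers.0.self_attn.W_pack", True),
    ("model.layers.0.self_attn.o_proj", True),
    ("model.layers.0.mlp.gate_proj", True),
    ("model.layers.0.mlp.up_proj", True),
    ("model.layers.0.mlp.down_proj", True),
    ("model.layers.0.input_layernorm", False),
    ("lm_head", True),
]
print(sorted(find_all_linear_names(modules)))
# ['W_pack', 'down_proj', 'gate_proj', 'o_proj', 'up_proj']
```

W_pack is Baichuan2's fused query/key/value projection, which is why it appears in the logged list alongside the usual o_proj and MLP projections.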
2024-03-06 18:21:14.919 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-06 18:21:14.921 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-06 18:21:14.921 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 02:58:39.260 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, 
ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_02-58-39_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, 
save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-07 02:58:39.265 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, 
label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=2, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_02-58-39_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-07 02:58:39.273 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=1, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=True, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, 
dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=3, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_02-58-39_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, 
report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 02:58:39.309 | INFO | __main__:init_components:333 - Initializing components...
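The arguments dumped above determine the effective global batch size per optimizer step. As a quick sanity check, the helper below computes it from the logged values; it is illustrative only, not part of the training script, and the world size of 4 is inferred from the local_rank values (0-3) that appear in this log.

```python
def effective_batch_size(per_device_batch_size: int,
                         gradient_accumulation_steps: int,
                         world_size: int) -> int:
    """Number of samples that contribute to one optimizer step."""
    return per_device_batch_size * gradient_accumulation_steps * world_size

# Values from the TrainingArguments dump above:
# per_device_train_batch_size=1, gradient_accumulation_steps=16,
# and (inferred) 4 DDP ranks.
print(effective_batch_size(1, 16, 4))  # 64
```

Later runs in this log switch to gradient_accumulation_steps=1, which shrinks the effective batch to 4 under the same inference.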
2024-03-07 02:58:39.812 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 02:58:39.813 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 02:58:39.814 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 03:31:34.586 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None,
ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_03-31-34_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, 
save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 03:31:34.637 | INFO | __main__:init_components:333 - Initializing components...
2024-03-07 03:31:35.144 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 03:31:35.144 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 03:31:35.144 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 03:34:13.120 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'down_proj', 'gate_proj', 'up_proj', 'W_pack']
2024-03-07 03:36:24.122 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-07 03:36:24.143 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-07 03:36:24.143 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-07 03:36:24.143 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-07 03:36:24.144 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 03:36:24.250 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 03:36:24.250 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 03:36:24.475 | INFO | __main__:main:387 - *** starting training ***
2024-03-07 03:49:43.178 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True},
adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_03-49-43_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, 
overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 03:49:43.225 | INFO | __main__:init_components:333 - Initializing components...
2024-03-07 03:49:43.702 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 03:49:43.702 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 03:49:43.702 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 03:53:15.915 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['W_pack', 'o_proj', 'up_proj', 'down_proj', 'gate_proj']
2024-03-07 03:55:27.542 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-07 03:55:27.605 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-07 03:55:27.606 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-07 03:55:27.606 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-07 03:55:27.606 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 03:55:27.831 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 03:55:27.831 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 03:55:28.564 | INFO | __main__:main:387 - *** starting training ***
2024-03-07 07:52:18.910 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True,
logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_07-52-18_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 07:52:19.058 | INFO | __main__:init_components:333 - Initializing components...
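The `find_all_linear_names` records in this log report the LoRA target modules ['W_pack', 'o_proj', 'gate_proj', 'up_proj', 'down_proj']. A minimal sketch of the pattern behind such a helper follows: walk the model's named modules, keep the quantized linear layers, and collect the last component of each dotted name. The function, the `is_linear` predicate, and the toy module list below are hypothetical stand-ins for illustration; the real helper inspects `bnb.nn.Linear4bit` modules on the loaded model.

```python
def find_linear_suffixes(named_modules, is_linear):
    """Collect unique last-name components of modules matching is_linear."""
    names = set()
    for name, module in named_modules:
        if is_linear(module):
            # 'model.layers.0.self_attn.W_pack' -> 'W_pack'
            names.add(name.split('.')[-1])
    names.discard('lm_head')  # the output head is usually excluded from LoRA
    return sorted(names)

# Toy stand-ins for one quantized Baichuan2 decoder block:
modules = [
    ('model.layers.0.self_attn.W_pack', 'linear'),
    ('model.layers.0.self_attn.o_proj', 'linear'),
    ('model.layers.0.mlp.gate_proj', 'linear'),
    ('model.layers.0.mlp.up_proj', 'linear'),
    ('model.layers.0.mlp.down_proj', 'linear'),
    ('model.layers.0.input_layernorm', 'norm'),
    ('lm_head', 'linear'),
]
print(find_linear_suffixes(modules, lambda m: m == 'linear'))
# -> ['W_pack', 'down_proj', 'gate_proj', 'o_proj', 'up_proj']
```

The set-based collection explains why the logged ordering differs from run to run: set iteration order is not stable across processes, so each run prints the same five names in a different order.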
2024-03-07 07:52:19.748 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 07:52:19.748 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 07:52:19.748 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 07:55:27.178 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['down_proj', 'W_pack', 'o_proj', 'up_proj', 'gate_proj']
2024-03-07 07:57:37.681 | INFO | __main__:load_model:283 - memory footprint of model: 10.72000503540039 GB
2024-03-07 07:57:37.694 | INFO | __main__:load_model:295 - Total model params: 7606.05M
2024-03-07 07:57:37.694 | INFO | __main__:init_components:349 - Train model with sft task
2024-03-07 07:57:37.694 | INFO | __main__:load_sft_dataset:315 - Loading data with UnifiedSFTDataset
2024-03-07 07:57:37.694 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 07:57:37.909 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 07:57:37.909 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 07:57:38.206 | INFO | __main__:main:387 - *** starting training ***
2024-03-07 08:49:42.226 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False,
do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_08-49-42_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, 
skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 08:49:42.257 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 08:49:42.738 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 08:49:42.738 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 08:49:42.738 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 08:52:19.758 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'up_proj', 'down_proj', 'W_pack', 'gate_proj']
2024-03-07 08:54:32.900 | INFO | __main__:load_model:284 - memory footprint of model: 10.875873565673828 GB
2024-03-07 08:54:32.913 | INFO | __main__:load_model:296 - Total model params: 7647.89M
2024-03-07 08:54:32.913 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-07 08:54:32.913 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-07 08:54:32.913 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 08:54:33.037 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 08:54:33.037 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 08:54:33.340 | INFO | __main__:main:388 - *** starting training ***
2024-03-07 09:05:50.446 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08,
auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_09-05-50_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, 
per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 09:05:50.465 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 09:05:50.907 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 09:05:50.907 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 09:05:50.907 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 09:09:20.856 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None,
eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_09-09-20_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, 
torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 09:09:20.870 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 09:09:21.431 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 09:09:21.431 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 09:09:21.432 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 09:13:11.015 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'down_proj', 'W_pack', 'up_proj', 'gate_proj']
2024-03-07 09:15:23.202 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB
2024-03-07 09:15:23.213 | INFO | __main__:load_model:296 - Total model params: 7703.68M
2024-03-07 09:15:23.214 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-07 09:15:23.214 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-07 09:15:23.214 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 09:15:23.367 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 09:15:23.367 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 09:15:23.679 | INFO | __main__:main:388 - *** starting training ***
2024-03-07 10:05:57.292 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None,
dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_10-05-57_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, 
push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 10:05:57.302 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 10:05:57.768 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 10:05:57.769 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 10:05:57.770 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 10:08:54.041 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['down_proj', 'o_proj', 'W_pack', 'gate_proj', 'up_proj']
2024-03-07 10:11:05.358 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB
2024-03-07 10:11:05.385 | INFO | __main__:load_model:296 - Total model params: 7703.68M
2024-03-07 10:11:05.390 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-07 10:11:05.391 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-07 10:11:05.391 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 10:11:05.544 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 10:11:05.545 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 10:11:06.795 | INFO | __main__:main:388 - *** starting training ***
2024-03-07 10:34:30.258 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_10-34-30_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={},
lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, ) 2024-03-07 10:34:30.290 | INFO | __main__:init_components:334 - Initializing components... 
2024-03-07 10:34:30.732 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 10:34:30.732 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 10:34:30.733 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 10:35:46.474 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=1, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_10-35-46_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 10:35:46.476 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 10:35:46.857 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 10:35:46.857 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 10:35:46.857 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 10:36:26.464 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_10-36-26_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 10:36:26.465 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 10:36:26.844 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 10:36:26.844 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 10:36:26.845 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 10:39:05.123 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['up_proj', 'W_pack', 'gate_proj', 'o_proj', 'down_proj']
2024-03-07 10:41:40.778 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB
2024-03-07 10:41:40.807 | INFO | __main__:load_model:296 - Total model params: 7703.68M
2024-03-07 10:41:40.807 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-07 10:41:40.808 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-07 10:41:40.808 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 10:41:41.037 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 10:41:41.037 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 10:41:42.979 | INFO | __main__:main:388 - *** starting training ***
2024-03-07 10:46:02.696 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_10-46-02_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=2, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 10:46:02.706 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 10:46:03.241 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 10:46:03.243 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 10:46:03.244 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 10:48:59.776 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'W_pack', 'up_proj', 'gate_proj', 'down_proj']
2024-03-07 10:51:13.440 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB
2024-03-07 10:51:13.467 | INFO | __main__:load_model:296 - Total model params: 7703.68M
2024-03-07 10:51:13.467 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-07 10:51:13.468 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-07 10:51:13.468 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 10:51:13.576 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 10:51:13.577 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 10:51:14.237 | INFO | __main__:main:388 - *** starting training ***
2024-03-07 10:55:29.646 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar07_10-55-29_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=200, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=500, save_strategy=steps, save_total_limit=1, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-07 10:55:29.660 | INFO | __main__:init_components:334 - Initializing components...
2024-03-07 10:55:30.232 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-07 10:55:30.233 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-07 10:55:30.233 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-07 10:58:47.891 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['W_pack', 'down_proj', 'up_proj', 'o_proj', 'gate_proj']
2024-03-07 11:00:58.999 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB
2024-03-07 11:00:59.018 | INFO | __main__:load_model:296 - Total model params: 7703.68M
2024-03-07 11:00:59.019 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-07 11:00:59.019 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-07 11:00:59.019 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-07 11:00:59.169 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-07 11:00:59.169 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-07 11:01:00.013 | INFO | __main__:main:388 - *** starting training ***
2024-03-08 02:30:06.655 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar08_02-30-06_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=100, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=3, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=100, weight_decay=0, )
2024-03-08 02:30:06.661 | INFO | __main__:init_components:334 - Initializing components...
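The runs above all train on the same 7720-sample dataset across 4 GPUs with per_device_train_batch_size=1 and gradient_accumulation_steps=16, i.e. an effective batch size of 64. As a rough sanity check on checkpoint cadence (save_steps) versus run length, the implied optimizer updates per epoch can be estimated; this is a minimal sketch, with the helper name mine and the even-sharding/partial-group rounding behavior an assumption about the trainer, not taken from the log:

```python
import math

def updates_per_epoch(num_samples: int, num_gpus: int,
                      per_device_batch: int, grad_accum: int) -> int:
    """Estimate optimizer updates per epoch under DDP data sharding.

    Assumes samples are split evenly across ranks (padding the last
    shard, as a non-drop-last distributed sampler does) and that a
    trailing partial accumulation group still triggers an update.
    """
    # Micro-batches each rank processes in one epoch.
    micro_batches = math.ceil(math.ceil(num_samples / num_gpus) / per_device_batch)
    # One optimizer update per `grad_accum` micro-batches.
    return max(math.ceil(micro_batches / grad_accum), 1)

# Values from the runs above: 7720 samples, _n_gpu=4,
# per_device_train_batch_size=1, gradient_accumulation_steps=16.
print(updates_per_epoch(7720, 4, 1, 16))
```

Under these assumptions the single epoch spans roughly 121 updates, which is why save_steps=500 in the earlier runs would never fire before the end of training, while save_steps=100 does.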
2024-03-08 02:30:07.096 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696 2024-03-08 02:30:07.097 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat 2024-03-08 02:30:07.097 | INFO | __main__:load_model:221 - Train model with qlora 2024-03-08 02:32:45.873 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['gate_proj', 'up_proj', 'W_pack', 'down_proj', 'o_proj'] 2024-03-08 02:34:55.868 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB 2024-03-08 02:34:55.879 | INFO | __main__:load_model:296 - Total model params: 7703.68M 2024-03-08 02:34:55.880 | INFO | __main__:init_components:350 - Train model with sft task 2024-03-08 02:34:55.880 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset 2024-03-08 02:34:55.881 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl 2024-03-08 02:34:56.008 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training 2024-03-08 02:34:56.008 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset 2024-03-08 02:34:56.038 | INFO | __main__:main:388 - *** starting training *** 2024-03-08 07:54:06.009 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, 
do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar08_07-54-06_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=10, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=3, seed=42, 
skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=50, weight_decay=0, ) 2024-03-08 07:54:06.036 | INFO | __main__:init_components:334 - Initializing components... 2024-03-08 07:54:06.447 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696 2024-03-08 07:54:06.448 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat 2024-03-08 07:54:06.448 | INFO | __main__:load_model:221 - Train model with qlora 2024-03-08 07:56:49.939 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'down_proj', 'up_proj', 'gate_proj', 'W_pack'] 2024-03-08 07:59:01.455 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB 2024-03-08 07:59:01.470 | INFO | __main__:load_model:296 - Total model params: 7703.68M 2024-03-08 07:59:01.470 | INFO | __main__:init_components:350 - Train model with sft task 2024-03-08 07:59:01.470 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset 2024-03-08 07:59:01.470 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl 2024-03-08 07:59:01.614 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training 2024-03-08 07:59:01.615 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset 2024-03-08 07:59:02.224 | INFO | __main__:main:388 - *** starting training *** 2024-03-09 11:53:09.770 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, 
auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar09_11-53-09_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=10, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=3, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=50, weight_decay=0, )
2024-03-09 11:53:09.801 | INFO | __main__:init_components:334 - Initializing components...
2024-03-09 11:53:10.289 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-09 11:53:10.290 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-09 11:53:10.290 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-09 11:55:46.092 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'down_proj', 'W_pack', 'gate_proj', 'up_proj']
2024-03-09 11:57:56.297 | INFO | __main__:load_model:284 - memory footprint of model: 11.083698272705078 GB
2024-03-09 11:57:56.308 | INFO | __main__:load_model:296 - Total model params: 7703.68M
2024-03-09 11:57:56.309 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-09 11:57:56.309 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-09 11:57:56.309 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-09 11:57:56.424 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-09 11:57:56.425 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-09 11:57:56.469 | INFO | __main__:main:388 - *** starting training ***
2024-03-10 00:56:39.151 | INFO | __main__:setup_everything:52 - train_args:TrainingArguments( _n_gpu=4, accelerator_config={'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True}, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_persistent_workers=False, dataloader_pin_memory=True, dataloader_prefetch_factor=None, ddp_backend=None, ddp_broadcast_buffers=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, dispatch_batches=None, do_eval=False, do_predict=False, do_train=False, eval_accumulation_steps=None, eval_delay=0, eval_steps=None, evaluation_strategy=no, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=[], fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=None, full_determinism=False, gradient_accumulation_steps=16, gradient_checkpointing=True, gradient_checkpointing_kwargs=None, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_always_push=False, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=, ignore_data_skip=False, include_inputs_for_metrics=False, include_num_input_tokens_seen=False, include_tokens_per_second=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=0.0001, length_column_name=length, load_best_model_at_end=False, local_rank=0, log_level=passive, log_level_replica=warning, log_on_each_node=True, logging_dir=output/user-baichuan2-13b-v2-3.6/runs/Mar10_00-56-39_u, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=10, logging_strategy=steps, lr_scheduler_kwargs={}, lr_scheduler_type=constant_with_warmup, max_grad_norm=0.3, max_steps=-1, metric_for_best_model=None, mp_parameters=, neftune_noise_alpha=None, no_cuda=False, num_train_epochs=1, optim=paged_adamw_32bit, optim_args=None, output_dir=output/user-baichuan2-13b-v2-3.6, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=, ray_scope=last, remove_unused_columns=False, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=output/user-baichuan2-13b-v2-3.6, save_on_each_node=False, save_only_model=False, save_safetensors=True, save_steps=100, save_strategy=steps, save_total_limit=3, seed=42, skip_memory_metrics=True, split_batches=None, tf32=None, torch_compile=False, torch_compile_backend=None, torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_cpu=False, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=50, weight_decay=0, )
2024-03-10 00:56:39.157 | INFO | __main__:init_components:334 - Initializing components...
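As a side note, the effective batch size implied by the arguments above (per_device_train_batch_size=1, gradient_accumulation_steps=16, _n_gpu=4) and the 7720-sample dataset can be worked out as follows; this is an illustrative calculation, not part of the training script:

```python
import math

# Values taken from the TrainingArguments dump above (Mar10 run).
per_device_train_batch_size = 1
gradient_accumulation_steps = 16
n_gpu = 4
dataset_size = 7720  # "There are 7720 data in dataset"

# Samples consumed per optimizer step across all GPUs.
effective_batch_size = per_device_train_batch_size * gradient_accumulation_steps * n_gpu

# Optimizer steps in the single epoch (num_train_epochs=1).
steps_per_epoch = math.ceil(dataset_size / effective_batch_size)

print(effective_batch_size, steps_per_epoch)  # 64 121
```

With warmup_steps=50, roughly the first 40% of this one-epoch run is spent warming up before the constant_with_warmup schedule flattens out.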
2024-03-10 00:56:39.568 | INFO | __main__:load_tokenizer:211 - vocab_size of tokenizer: 125696
2024-03-10 00:56:39.568 | INFO | __main__:load_model:220 - Loading model from base model: /home/jiakangxiang/.cache/modelscope/hub/baichuan-inc/Baichuan2-13B-Chat
2024-03-10 00:56:39.569 | INFO | __main__:load_model:221 - Train model with qlora
2024-03-10 00:59:13.672 | INFO | __main__:find_all_linear_names:85 - LoRA target module names: ['o_proj', 'W_pack', 'down_proj', 'up_proj', 'gate_proj']
2024-03-10 01:01:23.156 | INFO | __main__:load_model:284 - memory footprint of model: 10.875873565673828 GB
2024-03-10 01:01:23.167 | INFO | __main__:load_model:296 - Total model params: 7647.89M
2024-03-10 01:01:23.168 | INFO | __main__:init_components:350 - Train model with sft task
2024-03-10 01:01:23.168 | INFO | __main__:load_sft_dataset:316 - Loading data with UnifiedSFTDataset
2024-03-10 01:01:23.168 | INFO | component.dataset:__init__:19 - Loading data: ./data/train.jsonl
2024-03-10 01:01:23.306 | INFO | component.dataset:__init__:22 - Use template "baichuan2" for training
2024-03-10 01:01:23.307 | INFO | component.dataset:__init__:23 - There are 7720 data in dataset
2024-03-10 01:01:23.869 | INFO | __main__:main:388 - *** starting training ***
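The LoRA target module names logged above (o_proj, W_pack, down_proj, up_proj, gate_proj) are typically discovered by walking the model's named modules and collecting the leaf names of its quantized linear layers, skipping the output head. The sketch below illustrates that idea on plain dotted module paths rather than a real model; it is an assumption about how find_all_linear_names behaves, not the script's actual code (which would iterate model.named_modules() and check for bitsandbytes 4-bit linear layers):

```python
def find_all_linear_names(module_paths, exclude=("lm_head",)):
    """Collect unique leaf names of linear modules to use as LoRA targets.

    module_paths: dotted paths such as "model.layers.0.self_attn.W_pack",
    as produced by model.named_modules() in a Transformers model.
    The output head is excluded, as is common in QLoRA setups.
    """
    names = set()
    for path in module_paths:
        leaf = path.split(".")[-1]  # e.g. "W_pack" from "...self_attn.W_pack"
        if leaf not in exclude:
            names.add(leaf)
    return sorted(names)
```

Because the result comes out of a set, the ordering of the logged list can differ between runs, which is why the two runs above report the same five modules in different orders.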