[WARNING|2025-01-21 12:42:54] logging.py:162 >> `ddp_find_unused_parameters` needs to be set as False for LoRA in DDP training.
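With LoRA, most base-model parameters are frozen and never receive gradients, so DDP's unused-parameter scan would either raise errors or waste a graph traversal every step; hence the warning that the flag must be off. A minimal sketch of the corresponding setting via Hugging Face TrainingArguments (output_dir and bf16 are illustrative, not taken from this log):

# Sketch: disabling DDP's unused-parameter detection for LoRA training.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="saves/lora-run",       # illustrative path
    bf16=True,                         # matches the bfloat16 compute dtype logged below
    ddp_find_unused_parameters=False,  # required for LoRA in DDP, per the warning above
)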
[INFO|2025-01-21 12:42:54] parser.py:359 >> Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-01-21 12:42:54] configuration_utils.py:679 >> loading configuration file config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/config.json
[INFO|2025-01-21 12:42:54] configuration_utils.py:746 >> Model config MistralConfig {
"_name_or_path": "mistralai/Mistral-7B-Instruct-v0.2",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.46.1",
"use_cache": true,
"vocab_size": 32000
}
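The dump shows grouped-query attention: 32 query heads share 8 key/value heads (4 queries per KV head), and 32 heads times head_dim 128 recovers hidden_size 4096. A quick check with AutoConfig, mirroring (not reproducing) the loader that emitted the lines above:

# Check: GQA and head-dimension arithmetic from the config above.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
assert cfg.num_attention_heads * cfg.head_dim == cfg.hidden_size    # 32 * 128 == 4096
assert cfg.num_attention_heads // cfg.num_key_value_heads == 4      # 4 query heads per KV head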
[INFO|2025-01-21 12:42:54] tokenization_utils_base.py:2211 >> loading file tokenizer.model from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/tokenizer.model
[INFO|2025-01-21 12:42:54] parser.py:359 >> Process rank: 1, device: cuda:1, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-01-21 12:42:54] parser.py:359 >> Process rank: 4, device: cuda:4, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-01-21 12:42:54] parser.py:359 >> Process rank: 5, device: cuda:5, n_gpu: 1, distributed training: True, compute dtype: torch.bfloat16
[INFO|2025-01-21 12:42:54] tokenization_utils_base.py:2211 >> loading file tokenizer.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/tokenizer.json
[INFO|2025-01-21 12:42:54] tokenization_utils_base.py:2211 >> loading file added_tokens.json from cache at None
[INFO|2025-01-21 12:42:54] tokenization_utils_base.py:2211 >> loading file special_tokens_map.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/special_tokens_map.json
[INFO|2025-01-21 12:42:54] tokenization_utils_base.py:2211 >> loading file tokenizer_config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/tokenizer_config.json
[INFO|2025-01-21 12:42:55] configuration_utils.py:679 >> loading configuration file config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/config.json
[INFO|2025-01-21 12:42:55] configuration_utils.py:746 >> Model config MistralConfig {
"_name_or_path": "mistralai/Mistral-7B-Instruct-v0.2",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.46.1",
"use_cache": true,
"vocab_size": 32000
}
[INFO|2025-01-21 12:42:55] tokenization_utils_base.py:2211 >> loading file tokenizer.model from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/tokenizer.model
[INFO|2025-01-21 12:42:55] tokenization_utils_base.py:2211 >> loading file tokenizer.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/tokenizer.json
[INFO|2025-01-21 12:42:55] tokenization_utils_base.py:2211 >> loading file added_tokens.json from cache at None
[INFO|2025-01-21 12:42:55] tokenization_utils_base.py:2211 >> loading file special_tokens_map.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/special_tokens_map.json
[INFO|2025-01-21 12:42:55] tokenization_utils_base.py:2211 >> loading file tokenizer_config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/tokenizer_config.json
[INFO|2025-01-21 12:42:55] logging.py:157 >> Add pad token: </s>
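The "loading file" lines above correspond to a single AutoTokenizer.from_pretrained call resolving each tokenizer artifact from the local cache (added_tokens.json does not exist for this model, hence "cache at None"). Mistral's tokenizer ships without a pad token, so the trainer reuses EOS for padding; a sketch of the equivalent calls:

# Sketch: the tokenizer load behind the "loading file" lines, plus the
# pad-token assignment reported as "Add pad token: </s>".
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tok.pad_token = tok.eos_token   # "</s>" (id 2, per eos_token_id in the config)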
[INFO|2025-01-21 12:42:55] logging.py:157 >> Loading dataset followir_advanced_1.json...
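The dataset is a local JSON file; its schema is not recorded in this log, so none is assumed here. A sketch of loading the same file directly with the datasets library:

# Sketch: loading the JSON training file named in the log.
from datasets import load_dataset

ds = load_dataset("json", data_files="followir_advanced_1.json", split="train")
print(len(ds))   # the trainer reports 1,776 examples below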
[INFO|2025-01-21 12:42:58] configuration_utils.py:679 >> loading configuration file config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/config.json
[INFO|2025-01-21 12:42:58] configuration_utils.py:746 >> Model config MistralConfig {
"_name_or_path": "mistralai/Mistral-7B-Instruct-v0.2",
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.46.1",
"use_cache": true,
"vocab_size": 32000
}
[INFO|2025-01-21 12:42:58] modeling_utils.py:3937 >> loading weights file model.safetensors from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/model.safetensors.index.json
[INFO|2025-01-21 12:42:58] modeling_utils.py:1670 >> Instantiating MistralForCausalLM model under default dtype torch.bfloat16.
[INFO|2025-01-21 12:42:58] configuration_utils.py:1096 >> Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2
}
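The weights come from a sharded safetensors checkpoint (the .index.json maps parameter names to shard files), and the model is instantiated directly in bfloat16. A sketch of an equivalent load, with the SDPA attention backend the trainer reports using further below:

# Sketch: loading the checkpoint in bfloat16 with SDPA attention,
# mirroring (not reproducing) the framework's own load path.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2",
    torch_dtype=torch.bfloat16,    # "Instantiating ... under default dtype torch.bfloat16"
    attn_implementation="sdpa",    # "Using torch SDPA for faster training and inference"
)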
[INFO|2025-01-21 12:43:04] modeling_utils.py:4800 >> All model checkpoint weights were used when initializing MistralForCausalLM.
[INFO|2025-01-21 12:43:04] modeling_utils.py:4808 >> All the weights of MistralForCausalLM were initialized from the model checkpoint at mistralai/Mistral-7B-Instruct-v0.2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use MistralForCausalLM for predictions without further training.
[INFO|2025-01-21 12:43:04] configuration_utils.py:1051 >> loading configuration file generation_config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/generation_config.json
[INFO|2025-01-21 12:43:04] configuration_utils.py:1096 >> Generate config GenerationConfig {
"bos_token_id": 1,
"eos_token_id": 2
}
[INFO|2025-01-21 12:43:04] logging.py:157 >> Gradient checkpointing enabled.
[INFO|2025-01-21 12:43:04] logging.py:157 >> Using torch SDPA for faster training and inference.
[INFO|2025-01-21 12:43:04] logging.py:157 >> Upcasting trainable params to float32.
[INFO|2025-01-21 12:43:04] logging.py:157 >> Fine-tuning method: LoRA
[INFO|2025-01-21 12:43:05] logging.py:157 >> trainable params: 6,815,744 || all params: 7,248,547,840 || trainable%: 0.0940
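The count is exact for rank-16 adapters on q_proj and v_proj: 16 x (4096 + 4096) + 16 x (4096 + 1024) = 212,992 parameters per layer, times 32 layers = 6,815,744 (rank 8 on q/k/v/o gives the same total; the log does not record which configuration was used). These adapters are the trainable parameters upcast to float32 above, while the frozen base stays in bfloat16. A PEFT sketch under the rank-16 assumption, continuing from the model loaded in the sketch above:

# Sketch: a PEFT LoRA setup consistent with the logged trainable-parameter count.
# ASSUMPTION: r=16 on q_proj/v_proj (r=8 on q/k/v/o matches the count equally well);
# lora_alpha is illustrative and not in the log.
from peft import LoraConfig, get_peft_model

lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()
# trainable params: 6,815,744 || all params: 7,248,547,840 || trainable%: 0.0940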
[INFO|2025-01-21 12:43:05] trainer.py:698 >> Using auto half precision backend
[INFO|2025-01-21 12:43:06] trainer.py:2313 >> ***** Running training *****
[INFO|2025-01-21 12:43:06] trainer.py:2314 >> Num examples = 1,776
[INFO|2025-01-21 12:43:06] trainer.py:2315 >> Num Epochs = 8
[INFO|2025-01-21 12:43:06] trainer.py:2316 >> Instantaneous batch size per device = 4
[INFO|2025-01-21 12:43:06] trainer.py:2319 >> Total train batch size (w. parallel, distributed & accumulation) = 768
[INFO|2025-01-21 12:43:06] trainer.py:2320 >> Gradient Accumulation steps = 32
[INFO|2025-01-21 12:43:06] trainer.py:2321 >> Total optimization steps = 16
[INFO|2025-01-21 12:43:06] trainer.py:2322 >> Number of trainable parameters = 6,815,744
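These numbers are mutually consistent and pin down the world size, which the log never states directly: 768 / (4 x 32) = 6 processes (only ranks 0, 1, 4, and 5 appear above; the other two presumably logged elsewhere). A worked check:

# Worked check of the trainer's batch and step arithmetic (world size inferred).
per_device, grad_accum, global_batch = 4, 32, 768
world_size = global_batch // (per_device * grad_accum)        # 6 GPUs
micro_batches_per_epoch = 1776 // (per_device * world_size)   # 74
steps_per_epoch = micro_batches_per_epoch // grad_accum       # 74 // 32 = 2
assert steps_per_epoch * 8 == 16                              # 8 epochs -> 16 steps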
[INFO|2025-01-21 13:40:59] logging.py:157 >> {'loss': 5.5456, 'learning_rate': 2.3334e-05, 'epoch': 2.16}
[INFO|2025-01-21 14:38:30] logging.py:157 >> {'loss': 1.1889, 'learning_rate': 9.2597e-06, 'epoch': 4.32}
[INFO|2025-01-21 15:36:25] logging.py:157 >> {'loss': 0.3610, 'learning_rate': 2.8822e-07, 'epoch': 6.49}
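The three logged learning rates fall exactly on a cosine decay from a 3e-5 peak over the 16 optimization steps, evaluated at steps 5, 10, and 15, i.e. a cosine scheduler with logging every 5 steps (inferred from the numbers; neither the peak LR nor the scheduler type is stated in this log):

# Check: logged LRs match cosine decay lr(s) = lr0/2 * (1 + cos(pi*s/S)).
import math

lr0, total_steps = 3e-5, 16
for step in (5, 10, 15):
    print(f"step {step}: {0.5 * lr0 * (1 + math.cos(math.pi * step / total_steps)):.4e}")
# step 5: 2.3334e-05 / step 10: 9.2597e-06 / step 15: 2.8822e-07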
[INFO|2025-01-21 15:47:57] trainer.py:3801 >> Saving model checkpoint to saves/Mistral-7B-Instruct-v0.2/lora/train_2025-01-21-12-40-17/checkpoint-16
[INFO|2025-01-21 15:47:57] configuration_utils.py:679 >> loading configuration file config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/config.json
[INFO|2025-01-21 15:47:57] configuration_utils.py:746 >> Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.46.1",
"use_cache": true,
"vocab_size": 32000
}
[INFO|2025-01-21 15:47:57] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Mistral-7B-Instruct-v0.2/lora/train_2025-01-21-12-40-17/checkpoint-16/tokenizer_config.json
[INFO|2025-01-21 15:47:57] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Mistral-7B-Instruct-v0.2/lora/train_2025-01-21-12-40-17/checkpoint-16/special_tokens_map.json
[INFO|2025-01-21 15:47:58] trainer.py:2584 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|2025-01-21 15:47:58] trainer.py:3801 >> Saving model checkpoint to saves/Mistral-7B-Instruct-v0.2/lora/train_2025-01-21-12-40-17
[INFO|2025-01-21 15:47:58] configuration_utils.py:679 >> loading configuration file config.json from cache at /nethome/atrinh31/.cache/huggingface/hub/models--mistralai--Mistral-7B-Instruct-v0.2/snapshots/3ad372fc79158a2148299e3318516c786aeded6c/config.json
[INFO|2025-01-21 15:47:58] configuration_utils.py:746 >> Model config MistralConfig {
"architectures": [
"MistralForCausalLM"
],
"attention_dropout": 0.0,
"bos_token_id": 1,
"eos_token_id": 2,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 32768,
"model_type": "mistral",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"rms_norm_eps": 1e-05,
"rope_theta": 1000000.0,
"sliding_window": null,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.46.1",
"use_cache": true,
"vocab_size": 32000
}
[INFO|2025-01-21 15:47:58] tokenization_utils_base.py:2646 >> tokenizer config file saved in saves/Mistral-7B-Instruct-v0.2/lora/train_2025-01-21-12-40-17/tokenizer_config.json
[INFO|2025-01-21 15:47:58] tokenization_utils_base.py:2655 >> Special tokens file saved in saves/Mistral-7B-Instruct-v0.2/lora/train_2025-01-21-12-40-17/special_tokens_map.json
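What was saved here is a LoRA adapter plus tokenizer files, not full model weights; at inference time the adapter is attached to (or merged into) the base checkpoint. A sketch with PEFT:

# Sketch: loading the saved adapter onto the base model for inference.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(
    base, "saves/Mistral-7B-Instruct-v0.2/lora/train_2025-01-21-12-40-17"
)
model = model.merge_and_unload()   # optional: fold the adapter into the base weights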
[WARNING|2025-01-21 15:47:58] logging.py:162 >> No metric eval_loss to plot.
[WARNING|2025-01-21 15:47:58] logging.py:162 >> No metric eval_accuracy to plot.
[INFO|2025-01-21 15:47:58] modelcard.py:449 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}