Social-llama/benchmark/running_log.txt
[INFO|configuration_utils.py:675] 2024-10-25 03:43:47,884 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:43:47,886 >> Model config LlamaConfig {
"_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:47,945 >> loading file tokenizer.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/tokenizer.json
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:47,945 >> loading file tokenizer.model from cache at None
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:47,945 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:47,945 >> loading file special_tokens_map.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/special_tokens_map.json
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:47,945 >> loading file tokenizer_config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/tokenizer_config.json
[INFO|tokenization_utils_base.py:2470] 2024-10-25 03:43:48,495 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
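
The lines above record the base config and tokenizer for meta-llama/Meta-Llama-3-8B-Instruct being read from the local Hugging Face cache. A minimal sketch of the equivalent public API calls is below; the actual run goes through LLaMA-Factory's own loader, so this is only an illustration of what produces these cache lookups and log messages.

from transformers import AutoConfig, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

# Reads config.json from the local HF cache (or downloads it on a cache miss),
# yielding the LlamaConfig dump recorded above.
config = AutoConfig.from_pretrained(model_id)
print(config.hidden_size, config.num_hidden_layers, config.vocab_size)  # 4096 32 128256
print(config.bos_token_id, config.eos_token_id)                         # 128000 128009

# Reads tokenizer.json / tokenizer_config.json / special_tokens_map.json,
# producing the "loading file ..." lines above.
tokenizer = AutoTokenizer.from_pretrained(model_id)
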
[INFO|configuration_utils.py:675] 2024-10-25 03:43:48,712 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:43:48,713 >> Model config LlamaConfig {
"_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:48,775 >> loading file tokenizer.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/tokenizer.json
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:48,775 >> loading file tokenizer.model from cache at None
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:48,775 >> loading file added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:48,776 >> loading file special_tokens_map.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/special_tokens_map.json
[INFO|tokenization_utils_base.py:2206] 2024-10-25 03:43:48,776 >> loading file tokenizer_config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/tokenizer_config.json
[INFO|tokenization_utils_base.py:2470] 2024-10-25 03:43:49,388 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
[INFO|configuration_utils.py:675] 2024-10-25 03:46:26,140 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:46:26,142 >> Model config LlamaConfig {
"_name_or_path": "meta-llama/Meta-Llama-3-8B-Instruct",
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|modeling_utils.py:3732] 2024-10-25 03:46:26,215 >> loading weights file model.safetensors from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/model.safetensors.index.json
[INFO|modeling_utils.py:1622] 2024-10-25 03:46:26,217 >> Instantiating LlamaForCausalLM model under default dtype torch.bfloat16.
[INFO|configuration_utils.py:1099] 2024-10-25 03:46:26,219 >> Generate config GenerationConfig {
"bos_token_id": 128000,
"eos_token_id": 128009
}
[INFO|modeling_utils.py:4574] 2024-10-25 03:46:30,956 >> All model checkpoint weights were used when initializing LlamaForCausalLM.
[INFO|modeling_utils.py:4582] 2024-10-25 03:46:30,957 >> All the weights of LlamaForCausalLM were initialized from the model checkpoint at meta-llama/Meta-Llama-3-8B-Instruct.
If your task is similar to the task the model of the checkpoint was trained on, you can already use LlamaForCausalLM for predictions without further training.
[INFO|configuration_utils.py:1054] 2024-10-25 03:46:31,010 >> loading configuration file generation_config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/generation_config.json
[INFO|configuration_utils.py:1099] 2024-10-25 03:46:31,010 >> Generate config GenerationConfig {
"bos_token_id": 128000,
"do_sample": true,
"eos_token_id": [
128001,
128009
],
"max_length": 4096,
"temperature": 0.6,
"top_p": 0.9
}
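
The GenerationConfig above is what model.generate() picks up by default for this checkpoint: nucleus sampling with temperature 0.6 and top_p 0.9, and two stop ids (128001 end-of-text, 128009 end-of-turn). A hedged sketch of passing the same settings explicitly is below; the prompt and max_new_tokens value are illustrative placeholders, not part of this run.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Hello!"}]  # placeholder prompt
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    inputs,
    do_sample=True,                  # "do_sample": true
    temperature=0.6,                 # "temperature": 0.6
    top_p=0.9,                       # "top_p": 0.9
    eos_token_id=[128001, 128009],   # both stop ids from the config
    max_new_tokens=256,              # placeholder; the config caps max_length at 4096
)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
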
[INFO|trainer.py:667] 2024-10-25 03:46:31,396 >> Using auto half precision backend
[INFO|trainer.py:2243] 2024-10-25 03:46:32,073 >> ***** Running training *****
[INFO|trainer.py:2244] 2024-10-25 03:46:32,073 >> Num examples = 117
[INFO|trainer.py:2245] 2024-10-25 03:46:32,073 >> Num Epochs = 12
[INFO|trainer.py:2246] 2024-10-25 03:46:32,073 >> Instantaneous batch size per device = 2
[INFO|trainer.py:2249] 2024-10-25 03:46:32,073 >> Total train batch size (w. parallel, distributed & accumulation) = 8
[INFO|trainer.py:2250] 2024-10-25 03:46:32,073 >> Gradient Accumulation steps = 4
[INFO|trainer.py:2251] 2024-10-25 03:46:32,073 >> Total optimization steps = 168
[INFO|trainer.py:2252] 2024-10-25 03:46:32,076 >> Number of trainable parameters = 20,971,520
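
The numbers in the training banner are mutually consistent. One consistent reading: with a per-device batch of 2, gradient accumulation of 4, and apparently a single device, the effective batch is 8 examples per optimizer step, so 117 examples give floor(117 / 8) = 14 optimizer steps per epoch and 14 x 12 epochs = 168 total optimization steps. The 20,971,520 trainable parameters match rank-8 LoRA adapters on all seven linear projections of each of the 32 decoder layers; the rank and target modules are not stated in this log, so that part is an inference from the count. A small sanity-check sketch:

# Sanity check of the training banner above. LoRA rank (8) and target modules
# (all seven projections) are NOT stated in this log; they are inferred because
# they reproduce the reported trainable-parameter count exactly.
num_examples, per_device_bs, grad_accum, epochs = 117, 2, 4, 12
effective_bs = per_device_bs * grad_accum        # 8, "Total train batch size"
steps_per_epoch = num_examples // effective_bs   # 14
print(steps_per_epoch * epochs)                  # 168, "Total optimization steps"

hidden, kv_dim, inter, layers, rank = 4096, 1024, 14336, 32, 8  # kv_dim = 8 kv heads * 128 head_dim
proj_shapes = [
    (hidden, hidden),   # q_proj
    (hidden, kv_dim),   # k_proj
    (hidden, kv_dim),   # v_proj
    (hidden, hidden),   # o_proj
    (hidden, inter),    # gate_proj
    (hidden, inter),    # up_proj
    (inter, hidden),    # down_proj
]
lora_params = layers * sum(rank * (d_in + d_out) for d_in, d_out in proj_shapes)
print(lora_params)                               # 20971520, "Number of trainable parameters"
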
[INFO|trainer.py:3705] 2024-10-25 03:48:12,417 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-10
[INFO|configuration_utils.py:675] 2024-10-25 03:48:12,560 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:48:12,561 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:48:12,727 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-10/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:48:12,727 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-10/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 03:49:48,740 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-20
[INFO|configuration_utils.py:675] 2024-10-25 03:49:48,862 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:49:48,862 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:49:48,979 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-20/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:49:48,979 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-20/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 03:51:28,894 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-30
[INFO|configuration_utils.py:675] 2024-10-25 03:51:29,020 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:51:29,021 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:51:29,142 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-30/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:51:29,142 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-30/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 03:53:07,878 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-40
[INFO|configuration_utils.py:675] 2024-10-25 03:53:08,011 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:53:08,012 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:53:08,124 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-40/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:53:08,124 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-40/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 03:54:47,749 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-50
[INFO|configuration_utils.py:675] 2024-10-25 03:54:47,877 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:54:47,878 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:54:47,975 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-50/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:54:47,976 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-50/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 03:56:27,690 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-60
[INFO|configuration_utils.py:675] 2024-10-25 03:56:27,819 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:56:27,820 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:56:27,992 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-60/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:56:27,992 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-60/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 03:58:09,117 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-70
[INFO|configuration_utils.py:675] 2024-10-25 03:58:09,246 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:58:09,247 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:58:09,362 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-70/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:58:09,362 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-70/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 03:59:47,308 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-80
[INFO|configuration_utils.py:675] 2024-10-25 03:59:47,439 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 03:59:47,441 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 03:59:47,589 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-80/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 03:59:47,590 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-80/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:01:26,666 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-90
[INFO|configuration_utils.py:675] 2024-10-25 04:01:26,785 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:01:26,785 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:01:26,899 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-90/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:01:26,899 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-90/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:03:03,936 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-100
[INFO|configuration_utils.py:675] 2024-10-25 04:03:04,059 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:03:04,060 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:03:04,172 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-100/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:03:04,173 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-100/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:04:41,054 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-110
[INFO|configuration_utils.py:675] 2024-10-25 04:04:41,178 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:04:41,179 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:04:41,293 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-110/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:04:41,293 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-110/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:06:21,626 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-120
[INFO|configuration_utils.py:675] 2024-10-25 04:06:21,760 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:06:21,760 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:06:21,876 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-120/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:06:21,876 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-120/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:08:03,973 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-130
[INFO|configuration_utils.py:675] 2024-10-25 04:08:04,099 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:08:04,100 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:08:04,218 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-130/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:08:04,218 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-130/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:09:44,060 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-140
[INFO|configuration_utils.py:675] 2024-10-25 04:09:44,183 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:09:44,184 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:09:44,300 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-140/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:09:44,300 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-140/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:11:24,827 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-150
[INFO|configuration_utils.py:675] 2024-10-25 04:11:24,999 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:11:25,000 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:11:25,140 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-150/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:11:25,141 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-150/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:13:04,925 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-160
[INFO|configuration_utils.py:675] 2024-10-25 04:13:05,054 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:13:05,055 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:13:05,180 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-160/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:13:05,180 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-160/special_tokens_map.json
[INFO|trainer.py:3705] 2024-10-25 04:14:21,670 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-168
[INFO|configuration_utils.py:675] 2024-10-25 04:14:21,811 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:14:21,812 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:14:21,952 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-168/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:14:21,952 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/checkpoint-168/special_tokens_map.json
[INFO|trainer.py:2505] 2024-10-25 04:14:22,390 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:3705] 2024-10-25 04:14:22,392 >> Saving model checkpoint to saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38
[INFO|configuration_utils.py:675] 2024-10-25 04:14:22,531 >> loading configuration file config.json from cache at /home/yiyangai/.cache/huggingface/hub/models--meta-llama--Meta-Llama-3-8B-Instruct/snapshots/5f0b02c75b57c5855da9ae460ce51323ea669d8a/config.json
[INFO|configuration_utils.py:742] 2024-10-25 04:14:22,532 >> Model config LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.45.2",
"use_cache": true,
"vocab_size": 128256
}
[INFO|tokenization_utils_base.py:2641] 2024-10-25 04:14:22,645 >> tokenizer config file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/tokenizer_config.json
[INFO|tokenization_utils_base.py:2650] 2024-10-25 04:14:22,645 >> Special tokens file saved in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38/special_tokens_map.json
[INFO|modelcard.py:449] 2024-10-25 04:14:22,993 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
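
The final adapter and tokenizer files end up in saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38. A hedged sketch of loading that output for inference is below; it assumes the directory contains a standard PEFT adapter (adapter_config.json plus adapter weights), which is what LoRA fine-tuning with this stack normally produces, so verify against the actual directory contents before relying on it.

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_dir = "saves/Llama-3-8B-Instruct/lora/train_2024-10-25-03-39-38"

# Tokenizer files were saved alongside the adapter (see the log lines above).
tokenizer = AutoTokenizer.from_pretrained(adapter_dir)

base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_dir)  # attach the LoRA adapter
model = model.merge_and_unload()                      # optional: fold adapter into base weights
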