[INFO|configuration_utils.py:672] 2024-10-16 11:45:22,718 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|configuration_utils.py:739] 2024-10-16 11:45:22,721 >> Model config Qwen2Config {
  "_name_or_path": "Qwen/Qwen2.5-7B",
  "architectures": [
    "Qwen2ForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "hidden_act": "silu",
  "hidden_size": 3584,
  "initializer_range": 0.02,
  "intermediate_size": 18944,
  "max_position_embeddings": 131072,
  "max_window_layers": 28,
  "model_type": "qwen2",
  "num_attention_heads": 28,
  "num_hidden_layers": 28,
  "num_key_value_heads": 4,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.45.0",
  "use_cache": true,
  "use_mrope": false,
  "use_sliding_window": false,
  "vocab_size": 152064
}
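The config dump above is the stock Qwen2.5-7B configuration resolved from the Hub cache. As a quick sanity check, a minimal sketch (assuming transformers 4.45.0, the version pinned in the dump) that reproduces the load and reads the fields most relevant to fine-tuning:

from transformers import AutoConfig

# Resolves the same cached config.json shown in the log (downloads it on first use).
config = AutoConfig.from_pretrained("Qwen/Qwen2.5-7B")

print(config.hidden_size)          # 3584
print(config.num_hidden_layers)    # 28
print(config.num_attention_heads)  # 28
print(config.num_key_value_heads)  # 4 -> grouped-query attention, 7 query heads per KV head
print(config.torch_dtype)          # torch.bfloat16, the native checkpoint dtype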


[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:29,567 >> loading file vocab.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/vocab.json

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:29,567 >> loading file merges.txt from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/merges.txt

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:29,567 >> loading file tokenizer.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/tokenizer.json

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:29,567 >> loading file added_tokens.json from cache at None

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:29,567 >> loading file special_tokens_map.json from cache at None

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:29,567 >> loading file tokenizer_config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/tokenizer_config.json

[INFO|tokenization_utils_base.py:2478] 2024-10-16 11:45:29,842 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
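The two "cache at None" entries simply mean added_tokens.json and special_tokens_map.json are not present in the repo, which is expected here; the fast tokenizer.json and tokenizer_config.json already carry the vocabulary and special-token definitions. A minimal sketch of the equivalent tokenizer load:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

print(tokenizer.eos_token_id)               # 151643, matching bos/eos_token_id in the config above
print(tokenizer("Hello, Qwen!").input_ids)  # BPE ids built from vocab.json + merges.txt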

[INFO|configuration_utils.py:672] 2024-10-16 11:45:30,787 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:31,022 >> loading file vocab.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/vocab.json

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:31,023 >> loading file merges.txt from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/merges.txt

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:31,023 >> loading file tokenizer.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/tokenizer.json

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:31,023 >> loading file added_tokens.json from cache at None

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:31,023 >> loading file special_tokens_map.json from cache at None

[INFO|tokenization_utils_base.py:2214] 2024-10-16 11:45:31,023 >> loading file tokenizer_config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/tokenizer_config.json

[INFO|tokenization_utils_base.py:2478] 2024-10-16 11:45:31,286 >> Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

[INFO|configuration_utils.py:672] 2024-10-16 11:45:38,164 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|modeling_utils.py:3726] 2024-10-16 11:45:39,187 >> loading weights file model.safetensors from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/model.safetensors.index.json

[INFO|modeling_utils.py:1622] 2024-10-16 11:58:39,508 >> Instantiating Qwen2ForCausalLM model under default dtype torch.bfloat16.

[INFO|configuration_utils.py:1099] 2024-10-16 11:58:39,510 >> Generate config GenerationConfig {
  "bos_token_id": 151643,
  "eos_token_id": 151643
}


[INFO|modeling_utils.py:4568] 2024-10-16 11:58:42,192 >> All model checkpoint weights were used when initializing Qwen2ForCausalLM.


[INFO|modeling_utils.py:4576] 2024-10-16 11:58:42,192 >> All the weights of Qwen2ForCausalLM were initialized from the model checkpoint at Qwen/Qwen2.5-7B.
If your task is similar to the task the model of the checkpoint was trained on, you can already use Qwen2ForCausalLM for predictions without further training.
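Note the 13-minute gap between the weights line at 11:45 and the instantiation line at 11:58: it most likely reflects fetching and loading the sharded safetensors weights, with model.safetensors.index.json mapping each parameter to its shard. A hedged sketch of the equivalent standalone load in bfloat16 (device_map is an illustrative choice; the trainer handles device placement itself):

import torch
from transformers import AutoModelForCausalLM

# Streams the sharded weights listed in model.safetensors.index.json.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B",
    torch_dtype=torch.bfloat16,   # matches "default dtype torch.bfloat16" in the log
    device_map="auto",            # assumption for standalone use, not what the trainer does
)
print(sum(p.numel() for p in model.parameters()) / 1e9)  # ~7.6 (billion parameters)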

[INFO|configuration_utils.py:1054] 2024-10-16 11:58:42,674 >> loading configuration file generation_config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/generation_config.json

[INFO|configuration_utils.py:1099] 2024-10-16 11:58:42,674 >> Generate config GenerationConfig {
  "bos_token_id": 151643,
  "eos_token_id": 151643,
  "max_new_tokens": 2048
}
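This generation config comes from the repo's generation_config.json and only sets inference defaults (here max_new_tokens=2048); it has no effect on the training loop. For reference, a one-liner to inspect it:

from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("Qwen/Qwen2.5-7B")
print(gen_cfg.eos_token_id, gen_cfg.max_new_tokens)  # 151643 2048, per the dump above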


[INFO|trainer.py:667] 2024-10-16 11:58:43,113 >> Using auto half precision backend

[INFO|trainer.py:2243] 2024-10-16 11:58:44,082 >> ***** Running training *****

[INFO|trainer.py:2244] 2024-10-16 11:58:44,082 >>   Num examples = 4,244

[INFO|trainer.py:2245] 2024-10-16 11:58:44,082 >>   Num Epochs = 6

[INFO|trainer.py:2246] 2024-10-16 11:58:44,082 >>   Instantaneous batch size per device = 2

[INFO|trainer.py:2249] 2024-10-16 11:58:44,082 >>   Total train batch size (w. parallel, distributed & accumulation) = 32

[INFO|trainer.py:2250] 2024-10-16 11:58:44,082 >>   Gradient Accumulation steps = 8

[INFO|trainer.py:2251] 2024-10-16 11:58:44,082 >>   Total optimization steps = 792

[INFO|trainer.py:2252] 2024-10-16 11:58:44,087 >>   Number of trainable parameters = 20,185,088
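The banner above pins down the effective setup. The total batch of 32 factors as 2 per device x 8 accumulation steps x 2 GPUs (the device count is implied, not printed). With 4,244 examples, that gives roughly 4,244 / 32 ≈ 132 optimizer steps per epoch, and 132 x 6 epochs = 792 total optimization steps, matching the log. The 20,185,088 trainable parameters are exactly what rank-8 LoRA on all seven linear projections of every layer yields: 28 layers x 8 x [2·(3584+3584) + 2·(3584+512) + 3·(3584+18944)] = 20,185,088, so the adapters appear to target q/k/v/o plus gate/up/down. A minimal PEFT sketch consistent with these numbers (the run itself was driven by LLaMA-Factory, so the arguments here are illustrative, not the exact config used):

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", torch_dtype=torch.bfloat16)

lora_cfg = LoraConfig(
    r=8,                      # rank implied by the trainable-parameter count above
    lora_alpha=16,            # assumption; alpha is not shown in the log
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # trainable params: 20,185,088 (~0.26% of all parameters)

The remaining numbers map directly onto TrainingArguments(per_device_train_batch_size=2, gradient_accumulation_steps=8, num_train_epochs=6, bf16=True); the checkpoint cadence below suggests save_steps=100.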

[INFO|trainer.py:3705] 2024-10-16 12:09:11,520 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-100

[INFO|configuration_utils.py:672] 2024-10-16 12:09:12,094 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 12:09:12,264 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-100/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 12:09:12,264 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-100/special_tokens_map.json
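Because this is a LoRA run, each checkpoint-* directory typically holds the adapter weights (adapter_model.safetensors, adapter_config.json) plus trainer state and the tokenizer files saved above, rather than a full 7B copy. A hedged sketch of loading an intermediate checkpoint for a quick qualitative check:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-100"
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", torch_dtype=torch.bfloat16)
model = PeftModel.from_pretrained(base, ckpt)   # attaches the saved LoRA adapter
tok = AutoTokenizer.from_pretrained(ckpt)       # tokenizer files were saved alongside (see log)

The same directories can also be passed to the trainer's resume_from_checkpoint to continue an interrupted run.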

[INFO|trainer.py:3705] 2024-10-16 12:19:29,954 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-200

[INFO|configuration_utils.py:672] 2024-10-16 12:19:30,553 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 12:19:30,712 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-200/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 12:19:30,712 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-200/special_tokens_map.json

[INFO|trainer.py:3705] 2024-10-16 12:29:59,906 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-300

[INFO|configuration_utils.py:672] 2024-10-16 12:30:00,505 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 12:30:00,660 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-300/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 12:30:00,661 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-300/special_tokens_map.json

[INFO|trainer.py:3705] 2024-10-16 12:40:38,339 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-400

[INFO|configuration_utils.py:672] 2024-10-16 12:40:39,786 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 12:40:39,954 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-400/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 12:40:39,955 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-400/special_tokens_map.json

[INFO|trainer.py:3705] 2024-10-16 12:51:16,635 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-500

[INFO|configuration_utils.py:672] 2024-10-16 12:51:18,143 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 12:51:18,303 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-500/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 12:51:18,304 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-500/special_tokens_map.json

[INFO|trainer.py:3705] 2024-10-16 13:01:37,451 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-600

[INFO|configuration_utils.py:672] 2024-10-16 13:01:38,641 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 13:01:38,796 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-600/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 13:01:38,797 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-600/special_tokens_map.json

[INFO|trainer.py:3705] 2024-10-16 13:11:55,082 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-700

[INFO|configuration_utils.py:672] 2024-10-16 13:11:56,185 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 13:11:56,342 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-700/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 13:11:56,342 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-700/special_tokens_map.json

[INFO|trainer.py:3705] 2024-10-16 13:21:39,927 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-792

[INFO|configuration_utils.py:672] 2024-10-16 13:21:40,893 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 13:21:41,053 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-792/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 13:21:41,053 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/checkpoint-792/special_tokens_map.json

[INFO|trainer.py:2505] 2024-10-16 13:21:41,374 >> 

Training completed. Do not forget to share your model on huggingface.co/models =)



[INFO|trainer.py:3705] 2024-10-16 13:21:41,376 >> Saving model checkpoint to saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16

[INFO|configuration_utils.py:672] 2024-10-16 13:21:42,154 >> loading configuration file config.json from cache at /home/.cache/huggingface/hub/models--Qwen--Qwen2.5-7B/snapshots/d149729398750b98c0af14eb82c78cfe92750796/config.json

[INFO|tokenization_utils_base.py:2649] 2024-10-16 13:21:42,295 >> tokenizer config file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/tokenizer_config.json

[INFO|tokenization_utils_base.py:2658] 2024-10-16 13:21:42,295 >> Special tokens file saved in saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16/special_tokens_map.json

[INFO|modelcard.py:449] 2024-10-16 13:21:42,527 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}}
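The final modelcard.py message is harmless: the auto-generated model card drops that result entry because it carries only a task type and no dataset or metric fields. With training finished, the adapter saved in the run directory can be merged into the base weights for standalone serving; a hedged sketch (the merged/ output path is illustrative):

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

run_dir = "saves/Qwen2.5-7B/lora/4k_train_2024-10-16-11-44-16"
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B", torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, run_dir).merge_and_unload()  # fold LoRA deltas into the base weights
merged.save_pretrained(f"{run_dir}/merged")
AutoTokenizer.from_pretrained(run_dir).save_pretrained(f"{run_dir}/merged")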