  0%|          | 0/478 [00:00<?, ?it/s]
[WARNING|modeling_utils.py:1188] 2024-04-26 15:57:21,671 >> Could not estimate the number of tokens of the input, floating-point operations will not be computed
  0%|          | 2/478 [00:03<12:43, 1.60s/it]
  5%|▌         | 25/478 [00:32<09:39, 1.28s/it]
 10%|█         | 50/478 [01:05<09:13, 1.29s/it]
 16%|█▌        | 75/478 [01:37<08:37, 1.28s/it]
 21%|██        | 100/478 [02:09<08:04, 1.28s/it]
[INFO|trainer.py:3614] 2024-04-26 15:59:29,412 >> ***** Running Evaluation *****
[INFO|trainer.py:3616] 2024-04-26 15:59:29,412 >> Num examples = 2000
[INFO|trainer.py:3619] 2024-04-26 15:59:29,412 >> Batch size = 8
  6%|▋         | 2/32 [00:00<00:03, 8.89it/s]
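A note on the eval step count: 2000 examples at batch size 8 would be 250 steps, yet the eval bar above runs to 32. The logged `Batch size = 8` is the per-device value, so 32 steps implies an effective eval batch of 64, i.e. data parallelism over 8 devices. That device count is an inference from the numbers, not something the log states, though the throughput reported below (241.7 samples/s at 3.87 steps/s ≈ 62.5 samples/step) agrees. A minimal sanity check:

```python
import math

num_examples = 2000      # "Num examples = 2000" above
per_device_batch = 8     # "Batch size = 8" above (per device)
world_size = 8           # assumed device count, not printed in this log

# ceil(2000 / 64) == 32, matching the eval progress bar's total.
assert math.ceil(num_examples / (per_device_batch * world_size)) == 32
```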
[INFO|configuration_utils.py:471] 2024-04-26 15:59:37,711 >> Configuration saved in ./checkpoint-100/config.json
[INFO|configuration_utils.py:697] 2024-04-26 15:59:37,713 >> Configuration saved in ./checkpoint-100/generation_config.json
{'eval_loss': 0.6759119629859924, 'eval_runtime': 8.2733, 'eval_samples_per_second': 241.742, 'eval_steps_per_second': 3.868, 'eval_rewards/chosen': 0.0017230990342795849, 'eval_rewards/rejected': -0.03281649947166443, 'eval_rewards/accuracies': 0.62890625, 'eval_rewards/margins': 0.0345395989716053, 'eval_logps/rejected': -407.8036804199219, 'eval_logps/chosen': -423.0196533203125, 'eval_logits/rejected': -3.2565112113952637, 'eval_logits/chosen': -3.313567638397217, 'epoch': 0.21}
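The `eval_rewards/*` entries are the usual DPO diagnostics: each completion gets an implicit reward equal to β times the policy-vs-reference log-prob ratio, and margins and accuracies are computed over chosen/rejected pairs. A sketch of how the reported quantities relate to each other (β = 0.1 is an assumption here, the usual TRL default; this log never prints it):

```python
import torch
import torch.nn.functional as F

def dpo_eval_stats(policy_chosen_logps, policy_rejected_logps,
                   ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Recompute the rewards/* metrics from summed per-sequence log-probs."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    margins = chosen_rewards - rejected_rewards
    # Sigmoid DPO loss: -log sigmoid(beta * difference of log-ratios).
    loss = -F.logsigmoid(margins).mean()
    return {
        "rewards/chosen": chosen_rewards.mean().item(),
        "rewards/rejected": rejected_rewards.mean().item(),
        "rewards/margins": margins.mean().item(),
        "rewards/accuracies": (chosen_rewards > rejected_rewards).float().mean().item(),
        "loss": loss.item(),
    }
```

At step 100 the margin (0.0345) and accuracy (0.629) show the policy has only just begun to separate chosen from rejected completions.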
[INFO|modeling_utils.py:2598] 2024-04-26 15:59:47,330 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at ./checkpoint-100/model.safetensors.index.json.
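The two-shard split is standard `save_pretrained` behavior once a model exceeds the per-shard limit (default "5GB", matching the message), with `model.safetensors.index.json` mapping each parameter name to the shard that holds it. Loading needs no special handling, and the budget can be changed on a re-save; roughly, with illustrative paths:

```python
from transformers import AutoModelForCausalLM

# from_pretrained follows model.safetensors.index.json across both shards.
model = AutoModelForCausalLM.from_pretrained("./checkpoint-100")

# Re-save with a different per-shard budget if needed.
model.save_pretrained("./checkpoint-100-resharded", max_shard_size="2GB")
```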
[INFO|tokenization_utils_base.py:2488] 2024-04-26 15:59:47,344 >> tokenizer config file saved in ./checkpoint-100/tokenizer_config.json
[INFO|tokenization_utils_base.py:2497] 2024-04-26 15:59:47,384 >> Special tokens file saved in ./checkpoint-100/special_tokens_map.json
[INFO|tokenization_utils_base.py:2488] 2024-04-26 16:00:07,103 >> tokenizer config file saved in ./tokenizer_config.json
[INFO|tokenization_utils_base.py:2497] 2024-04-26 16:00:07,105 >> Special tokens file saved in ./special_tokens_map.json
 26%|██▋       | 126/478 [03:19<07:28, 1.27s/it]
 32%|███▏      | 151/478 [03:51<06:56, 1.28s/it]
 37%|███▋      | 176/478 [04:23<06:28, 1.29s/it]
 42%|████▏     | 200/478 [04:54<05:52, 1.27s/it]
[INFO|trainer.py:3614] 2024-04-26 16:02:14,470 >> ***** Running Evaluation *****
[INFO|trainer.py:3616] 2024-04-26 16:02:14,470 >> Num examples = 2000
[INFO|trainer.py:3619] 2024-04-26 16:02:14,470 >> Batch size = 8
 19%|█▉        | 6/32 [00:01<00:06, 4.22it/s]
[INFO|configuration_utils.py:471] 2024-04-26 16:02:22,770 >> Configuration saved in ./checkpoint-200/config.json
[INFO|configuration_utils.py:697] 2024-04-26 16:02:22,773 >> Configuration saved in ./checkpoint-200/generation_config.json
{'eval_loss': 0.6533502340316772, 'eval_runtime': 8.2763, 'eval_samples_per_second': 241.653, 'eval_steps_per_second': 3.866, 'eval_rewards/chosen': -0.06664139777421951, 'eval_rewards/rejected': -0.16173213720321655, 'eval_rewards/accuracies': 0.64453125, 'eval_rewards/margins': 0.09509073942899704, 'eval_logps/rejected': -420.6952209472656, 'eval_logps/chosen': -429.85614013671875, 'eval_logits/rejected': -3.2240023612976074, 'eval_logits/chosen': -3.2767982482910156, 'epoch': 0.42}
[INFO|modeling_utils.py:2598] 2024-04-26 16:02:32,167 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at ./checkpoint-200/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2488] 2024-04-26 16:02:32,170 >> tokenizer config file saved in ./checkpoint-200/tokenizer_config.json
[INFO|tokenization_utils_base.py:2497] 2024-04-26 16:02:32,172 >> Special tokens file saved in ./checkpoint-200/special_tokens_map.json
[INFO|tokenization_utils_base.py:2488] 2024-04-26 16:02:50,674 >> tokenizer config file saved in ./tokenizer_config.json
[INFO|tokenization_utils_base.py:2497] 2024-04-26 16:02:50,676 >> Special tokens file saved in ./special_tokens_map.json
[INFO|trainer.py:3397] 2024-04-26 16:02:50,704 >> Deleting older checkpoint [checkpoint-100] due to args.save_total_limit
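The deletion above is `save_total_limit` rotation: checkpoint-100 is removed as soon as checkpoint-200 finishes writing, so the limit in this run is evidently 1. A hedged reconstruction of the arguments that would produce this cadence (the field names are real `TrainingArguments` parameters; the values are inferred from the log, not quoted from the run's config):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./",
    evaluation_strategy="steps",   # evals fire at steps 100, 200, 300, ...
    eval_steps=100,
    save_strategy="steps",         # checkpoints written at the same cadence
    save_steps=100,
    save_total_limit=1,            # inferred: checkpoint-100 deleted right after checkpoint-200
    per_device_eval_batch_size=8,  # "Batch size = 8" in the eval banner
)
```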
 47%|████▋     | 226/478 [06:05<05:20, 1.27s/it]
 52%|█████▏    | 250/478 [06:36<05:00, 1.32s/it]
 58%|█████▊    | 275/478 [07:09<04:25, 1.31s/it]
 63%|██████▎   | 300/478 [07:42<03:53, 1.31s/it]
[INFO|trainer.py:3614] 2024-04-26 16:05:02,769 >> ***** Running Evaluation *****
[INFO|trainer.py:3616] 2024-04-26 16:05:02,769 >> Num examples = 2000
[INFO|trainer.py:3619] 2024-04-26 16:05:02,769 >> Batch size = 8
 12%|█▎        | 4/32 [00:00<00:05, 5.09it/s]
[INFO|configuration_utils.py:471] 2024-04-26 16:05:11,146 >> Configuration saved in ./checkpoint-300/config.json
[INFO|configuration_utils.py:697] 2024-04-26 16:05:11,149 >> Configuration saved in ./checkpoint-300/generation_config.json
{'eval_loss': 0.6438009142875671, 'eval_runtime': 8.3559, 'eval_samples_per_second': 239.351, 'eval_steps_per_second': 3.83, 'eval_rewards/chosen': -0.10771973431110382, 'eval_rewards/rejected': -0.24101632833480835, 'eval_rewards/accuracies': 0.62109375, 'eval_rewards/margins': 0.13329659402370453, 'eval_logps/rejected': -428.6236572265625, 'eval_logps/chosen': -433.9639892578125, 'eval_logits/rejected': -3.2049574851989746, 'eval_logits/chosen': -3.2553329467773438, 'epoch': 0.63}
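Across the three evaluations the run is trending the right way for DPO: `eval_loss` falls 0.676 → 0.653 → 0.644 and `rewards/margins` grows 0.035 → 0.095 → 0.133, with both `logps/chosen` and `logps/rejected` drifting down but rejected falling faster. A small helper to pull that history out of a log like this one (assumes the metric dicts sit on their own lines, as above; the file name is illustrative):

```python
import ast

def eval_history(log_text: str):
    """Collect the {'eval_loss': ...} dicts printed at each evaluation."""
    return [ast.literal_eval(line.strip())
            for line in log_text.splitlines()
            if line.strip().startswith("{'eval_loss'")]

# e.g.: [r["eval_rewards/margins"] for r in eval_history(open("train.log").read())]
# -> [0.0345..., 0.0950..., 0.1332...]
```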
[INFO|modeling_utils.py:2598] 2024-04-26 16:05:20,618 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at ./checkpoint-300/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2488] 2024-04-26 16:05:20,621 >> tokenizer config file saved in ./checkpoint-300/tokenizer_config.json