  0%|          | 0/478 [00:00<?, ?it/s]
Could not estimate the number of tokens of the input, floating-point operations will not be computed
  0%|          | 2/478 [00:03<12:43, 1.60s/it]
  5%|▌         | 25/478 [00:32<09:39, 1.28s/it]
 10%|█         | 50/478 [01:05<09:13, 1.29s/it]
 16%|█▌        | 75/478 [01:37<08:37, 1.28s/it]
 21%|██        | 100/478 [02:09<08:04, 1.28s/it]
[INFO|trainer.py:3614] 2024-04-26 15:59:29,412 >> ***** Running Evaluation *****
[INFO|trainer.py:3616] 2024-04-26 15:59:29,412 >>   Num examples = 2000
[INFO|trainer.py:3619] 2024-04-26 15:59:29,412 >>   Batch size = 8
  6%|▋         | 2/32 [00:00<00:03, 8.89it/s]
[INFO|configuration_utils.py:471] 2024-04-26 15:59:37,711 >> Configuration saved in ./checkpoint-100/config.json
[INFO|configuration_utils.py:697] 2024-04-26 15:59:37,713 >> Configuration saved in ./checkpoint-100/generation_config.json
{'eval_loss': 0.6759119629859924, 'eval_runtime': 8.2733, 'eval_samples_per_second': 241.742, 'eval_steps_per_second': 3.868, 'eval_rewards/chosen': 0.0017230990342795849, 'eval_rewards/rejected': -0.03281649947166443, 'eval_rewards/accuracies': 0.62890625, 'eval_rewards/margins': 0.0345395989716053, 'eval_logps/rejected': -407.8036804199219, 'eval_logps/chosen': -423.0196533203125, 'eval_logits/rejected': -3.2565112113952637, 'eval_logits/chosen': -3.313567638397217, 'epoch': 0.21}
[INFO|modeling_utils.py:2598] 2024-04-26 15:59:47,330 >> The model is bigger than the maximum size per checkpoint (5GB) and is going to be split in 2 checkpoint shards. You can find where each parameters has been saved in the index located at ./checkpoint-100/model.safetensors.index.json.
[INFO|tokenization_utils_base.py:2488] 2024-04-26 15:59:47,344 >> tokenizer config file saved in ./checkpoint-100/tokenizer_config.json
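For readers cross-checking the eval metrics above: in DPO-style training logs, `eval_rewards/margins` is the mean difference between the chosen and rejected rewards. A minimal sanity check of the logged values (plain arithmetic, no library assumptions):

```python
# Values copied verbatim from the eval log line above.
eval_metrics = {
    "eval_rewards/chosen": 0.0017230990342795849,
    "eval_rewards/rejected": -0.03281649947166443,
    "eval_rewards/margins": 0.0345395989716053,
}

# margin = chosen reward - rejected reward
margin = eval_metrics["eval_rewards/chosen"] - eval_metrics["eval_rewards/rejected"]

# Agrees with the logged margin up to floating-point aggregation error.
assert abs(margin - eval_metrics["eval_rewards/margins"]) < 1e-6
```

A positive margin means the model already assigns higher reward to the chosen responses than the rejected ones, consistent with the 0.629 `eval_rewards/accuracies` shown at step 100.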