llama3.1-8b-coding-gpt4o-100k / train_results.json
{
  "epoch": 10.0,
  "total_flos": 7.760593801491513e+18,
  "train_loss": 0.6862981784684318,
  "train_runtime": 15610.7374,
  "train_samples": 116368,
  "train_samples_per_second": 10.757,
  "train_steps_per_second": 0.336
}
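A minimal sketch of how a `train_results.json` like this (the format emitted by the Hugging Face `Trainer` at the end of training) can be loaded and summarized. The field values below are copied from the file above; the interpretation of `train_runtime` as seconds follows the Trainer's convention.

```python
import json

# The train_results.json content, copied verbatim from the file above.
raw = """{
  "epoch": 10.0,
  "total_flos": 7.760593801491513e+18,
  "train_loss": 0.6862981784684318,
  "train_runtime": 15610.7374,
  "train_samples": 116368,
  "train_samples_per_second": 10.757,
  "train_steps_per_second": 0.336
}"""

results = json.loads(raw)

# train_runtime is reported in seconds by the HF Trainer.
hours = results["train_runtime"] / 3600

print(
    f"Trained {results['epoch']:.0f} epochs over {results['train_samples']} samples "
    f"in {hours:.1f} h (final train loss {results['train_loss']:.4f})"
)
```

In practice one would read the file from disk (`json.load(open("train_results.json"))`) instead of embedding the string; it is inlined here only to keep the sketch self-contained.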