llama3.1-8b-gpt4o_100k_closedqa-k / train_results.json
chansung · Model save (commit b7b7975, verified)
{
  "epoch": 1.0,
  "total_flos": 7.558147382936863e+17,
  "train_loss": 0.0,
  "train_runtime": 1.5255,
  "train_samples": 111440,
  "train_samples_per_second": 10721.267,
  "train_steps_per_second": 167.817
}
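As a minimal sketch of how to consume these metrics, the snippet below parses the JSON above with Python's standard `json` module and reads out a few fields (the JSON string is copied verbatim from the file; no extra fields or paths are assumed):

```python
import json

# Metrics blob copied verbatim from train_results.json above.
raw = """
{
  "epoch": 1.0,
  "total_flos": 7.558147382936863e+17,
  "train_loss": 0.0,
  "train_runtime": 1.5255,
  "train_samples": 111440,
  "train_samples_per_second": 10721.267,
  "train_steps_per_second": 167.817
}
"""

metrics = json.loads(raw)

# Report the headline training numbers.
print(f"epochs:          {metrics['epoch']}")
print(f"train loss:      {metrics['train_loss']}")
print(f"runtime (s):     {metrics['train_runtime']}")
print(f"samples seen:    {metrics['train_samples']}")
print(f"samples/second:  {metrics['train_samples_per_second']}")
```

Note that the recorded runtime (about 1.5 s) and zero train loss are unusual for 111,440 samples; logs like this can reflect a resumed or otherwise truncated run, so treat the throughput figures with caution.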