Meta-Llama-3-8B-Base-MI-5e-7 / train_results.json
{
"epoch": 0.998691442030882,
"total_flos": 0.0,
"train_loss": 0.0,
"train_runtime": 4.2983,
"train_samples": 61135,
"train_samples_per_second": 14223.022,
"train_steps_per_second": 110.974
}
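
A minimal sketch of how these Trainer metrics could be loaded and inspected with Python's standard json module; the local file path "train_results.json" is an assumption for illustration, not something specified by the repository:

import json

# Load the training summary written out by the Hugging Face Trainer.
# (The local path is an assumed example.)
with open("train_results.json") as f:
    metrics = json.load(f)

# Sanity check: throughput should be roughly train_samples / train_runtime.
print(f"epoch:      {metrics['epoch']:.4f}")
print(f"train loss: {metrics['train_loss']}")
print(f"runtime:    {metrics['train_runtime']} s")
print(f"throughput: {metrics['train_samples'] / metrics['train_runtime']:.1f} samples/s")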