llama3.2-1b-tamil / train_results.json
HARISHSENTHIL
fine-tuning with LLaMA-Factory on an H100, up to 30 epochs
89de10b
{
"epoch": 28.2,
"total_flos": 1937902246297600.0,
"train_loss": 1.3188127915064494,
"train_runtime": 64.4224,
"train_samples_per_second": 71.249,
"train_steps_per_second": 0.931
}
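A quick sanity check on these numbers: multiplying the reported per-second rates by `train_runtime` recovers the rough totals for the run. A minimal Python sketch (the derived sample and step counts are approximations computed here, not fields LLaMA-Factory writes to the file):

```python
import json

# train_results.json as reported above (values copied verbatim)
raw = """
{
  "epoch": 28.2,
  "total_flos": 1937902246297600.0,
  "train_loss": 1.3188127915064494,
  "train_runtime": 64.4224,
  "train_samples_per_second": 71.249,
  "train_steps_per_second": 0.931
}
"""

results = json.loads(raw)

# Derived figures are approximate: the per-second rates are themselves rounded.
total_samples = results["train_samples_per_second"] * results["train_runtime"]
total_steps = results["train_steps_per_second"] * results["train_runtime"]

print(f"~{total_samples:.0f} samples processed in ~{total_steps:.0f} optimizer steps")
print(f"final training loss: {results['train_loss']:.4f}")
```

Note that `epoch` is 28.2 even though the run was configured for 30 epochs, so training stopped (or was checkpointed) before the final epoch completed.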