llama2-13b-lora-alpaca-11-v1 / train_results.json
{
"epoch": 0.9937888198757764,
"total_flos": 8.089924109389005e+17,
"train_loss": 2.0078666627407076,
"train_runtime": 559.3432,
"train_samples": 51241,
"train_samples_per_second": 36.804,
"train_steps_per_second": 0.143
}
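
For context, a minimal sketch of how a metrics file like this is typically read back, assuming the JSON above has been downloaded locally (the local filename below is an assumption, not part of the repo's tooling):

    import json

    # Hypothetical local path to the downloaded metrics file.
    with open("train_results.json") as f:
        metrics = json.load(f)

    # Print the key training metrics recorded for the run.
    print(f"epoch:      {metrics['epoch']:.4f}")
    print(f"train_loss: {metrics['train_loss']:.4f}")
    print(f"runtime:    {metrics['train_runtime']:.1f} s")
    print(f"samples/s:  {metrics['train_samples_per_second']:.3f}")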