rationale_model_e3_save5000_f3

This model is a fine-tuned version of meta-llama/Llama-3.2-1B on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.9449

Model description

More information needed

Intended uses & limitations

More information needed
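
Pending fuller documentation, a minimal usage sketch is below. It assumes the checkpoint is the one hosted at Heejindo/rationale_model_e3_save5000_f3 (see the model tree at the end of this card) with the standard Llama causal-LM architecture; since the training dataset is undocumented, the prompt format here is purely illustrative.

```python
# Minimal loading sketch. The repo id comes from this card's model tree;
# the prompt format is an assumption, since the dataset is undocumented.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Heejindo/rationale_model_e3_save5000_f3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)  # ~1.24B params, F32 weights

prompt = "Question: Why does ice float on water?\nRationale:"  # illustrative only
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```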

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
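
As a reference point, here is a minimal sketch of how the listed hyperparameters map onto transformers.TrainingArguments. The output directory, evaluation cadence, and save cadence are not recorded in this card; eval_steps=1000 is inferred from the results table below and save_steps=5000 from the "_save5000" suffix in the model name, so treat them as assumptions.

```python
# Sketch only: reconstructs the listed hyperparameters. output_dir,
# eval_strategy/eval_steps, and save_steps are assumptions (see above).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="rationale_model_e3_save5000_f3",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,    # Adam betas and epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    eval_strategy="steps",  # assumed from the 1000-step eval cadence below
    eval_steps=1000,        # assumed
    save_steps=5000,        # assumed from the "_save5000" model-name suffix
)
```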

Training results

Training Loss | Epoch  | Step  | Validation Loss
------------- | ------ | ----- | ---------------
1.7548        | 0.1907 |  1000 | 1.9449
1.3912        | 0.3814 |  2000 | 2.0170
0.9994        | 0.5721 |  3000 | 2.1840
0.6462        | 0.7628 |  4000 | 2.3842
0.3791        | 0.9535 |  5000 | 2.7062
0.2017        | 1.1442 |  6000 | 2.9006
0.1723        | 1.3349 |  7000 | 3.0643
0.1449        | 1.5256 |  8000 | 3.2089
0.1292        | 1.7162 |  9000 | 3.3468
0.1177        | 1.9069 | 10000 | 3.4686
0.0996        | 2.0976 | 11000 | 3.5735
0.0966        | 2.2883 | 12000 | 3.6707
0.0913        | 2.4790 | 13000 | 3.7954
0.0916        | 2.6697 | 14000 | 3.9067
0.0859        | 2.8604 | 15000 | 3.9867
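
Validation loss is lowest at the first checkpoint (1.9449 at step 1000, the figure reported at the top of this card) and rises steadily afterward while training loss keeps falling, the usual signature of overfitting. Since these are cross-entropy losses, they convert to perplexity via exp(loss); a quick check:

```python
# Convert selected validation losses from the table to perplexities.
import math

val_losses = {1000: 1.9449, 5000: 2.7062, 15000: 3.9867}
for step, loss in val_losses.items():
    print(f"step {step:>5}: perplexity = {math.exp(loss):.2f}")
# step  1000: perplexity = 6.99
# step  5000: perplexity = 14.97
# step 15000: perplexity = 53.88
```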

Framework versions

  • Transformers 4.45.0
  • PyTorch 2.3.0
  • Datasets 2.14.4
  • Tokenizers 0.20.3
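
A quick way to check that a local environment matches these versions (a sketch; note the PyTorch import name is torch):

```python
# Print installed versions to compare against the list above.
import datasets
import tokenizers
import torch
import transformers

for name, module in [("Transformers", transformers), ("PyTorch", torch),
                     ("Datasets", datasets), ("Tokenizers", tokenizers)]:
    print(f"{name}: {module.__version__}")
```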

Model size

  • 1.24B params (F32, safetensors)

Model tree for Heejindo/rationale_model_e3_save5000_f3

  • Fine-tuned from meta-llama/Llama-3.2-1B