llama-7b-sst-2

This model is a fine-tuned version of meta-llama/Llama-3.1-8B-Instruct on an unspecified dataset; the model name suggests SST-2 (binary sentiment classification). It achieves the following results on the evaluation set (a sketch of the metric computation follows the list):

  • Loss: 0.2342
  • Accuracy: 0.9117
  • F1: 0.9126
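
The card does not say how Accuracy and F1 were computed. A minimal sketch of a Trainer-style compute_metrics callback, assuming scikit-learn and a two-class task (the exact code used for this run is not given):

```python
# Hypothetical sketch; the actual metric code for this run is not documented.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)          # pick the higher-scoring class
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds),          # binary F1, matching a 2-class task
    }
```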

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a matching TrainingArguments sketch follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 3
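
These values map directly onto transformers.TrainingArguments; a hypothetical reconstruction (output_dir is an assumption) could look like:

```python
# Hypothetical reconstruction from the listed hyperparameters;
# argument names follow transformers.TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama-7b-sst-2",       # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,     # 32 * 4 = 128 total train batch size
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
)
```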

Training results

Training Loss   Epoch    Step   Validation Loss   Accuracy   F1
No log          0.1900    100   0.4794            0.7810     0.7975
No log          0.3800    200   0.3449            0.8555     0.8591
No log          0.5701    300   0.3052            0.8796     0.8832
No log          0.7601    400   0.2865            0.8819     0.8847
1.638           0.9501    500   0.2738            0.8922     0.8932
1.638           1.1387    600   0.2604            0.9014     0.9025
1.638           1.3287    700   0.2683            0.9060     0.9040
1.638           1.5188    800   0.2525            0.9106     0.9099
1.638           1.7088    900   0.2596            0.9083     0.9119
0.9792          1.8988   1000   0.2422            0.9128     0.9126
0.9792          2.0874   1100   0.2426            0.9106     0.9101
0.9792          2.2774   1200   0.2465            0.9151     0.9176
0.9792          2.4675   1300   0.2411            0.9117     0.9118
0.9792          2.6575   1400   0.2356            0.9106     0.9114
0.8907          2.8475   1500   0.2342            0.9117     0.9126

("No log" means the training loss had not yet been reported at that evaluation step; the training loss here is logged apparently every 500 steps, so the same value repeats until the next log.)

Framework versions

  • PEFT 0.14.0
  • Transformers 4.47.1
  • PyTorch 2.5.1+cu124
  • Datasets 3.2.0
  • Tokenizers 0.21.0
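
How to use

The framework versions above indicate this repository contains a PEFT adapter rather than full model weights. A minimal, hypothetical loading sketch follows; the repo ids are taken from this card, while the dtype, label count (num_labels=2, matching a binary task like SST-2), and example input are assumptions:

```python
# Hypothetical usage sketch; everything beyond the repo ids is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "BayanDuygu/llama-7b-sst-2"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(
    base_id, num_labels=2, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, adapter_id)  # attach the adapter
model.eval()

inputs = tokenizer("a gripping, well-acted drama", return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(pred)  # 0 or 1; label names are not documented in this card
```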