---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_beit_base_sgd_0001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.77
---

# smids_3x_beit_base_sgd_0001_fold4

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5543
- Accuracy: 0.77
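
As a quick illustration, the snippet below is a minimal inference sketch using the Hugging Face Transformers `pipeline` API, not an official usage example. The repository id is an assumption based on the card's author and model name; replace it with wherever the checkpoint is actually hosted.

```python
from transformers import pipeline

# Assumed Hub repo id (author namespace inferred from the card); adjust if needed.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/smids_3x_beit_base_sgd_0001_fold4",
)

# Classify a local image and print the predicted labels with their scores.
for prediction in classifier("image.png"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```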

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
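
The card does not describe the underlying data beyond the generic `imagefolder` dataset type. As an illustration only, the sketch below shows how an image-folder layout (one sub-directory per class) is typically loaded with the Hugging Face Datasets library; the `data` directory is a placeholder, not the actual training data.

```python
from datasets import load_dataset

# Hypothetical layout: data/train/<class_name>/*.jpg and data/test/<class_name>/*.jpg
dataset = load_dataset("imagefolder", data_dir="data")

print(dataset)                    # DatasetDict with the splits found on disk
print(dataset["train"].features)  # an `image` feature plus an inferred `label` ClassLabel
```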

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a corresponding `TrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
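
The listed values correspond roughly to the following Hugging Face Transformers `TrainingArguments`. This is a reconstruction from the list above, not the original training script; `output_dir` and `evaluation_strategy` are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_3x_beit_base_sgd_0001_fold4",  # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the card reports per-epoch validation metrics
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer defaults,
# so no explicit optimizer argument is shown here.
```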

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2016        | 1.0   | 225   | 1.2841          | 0.345    |
| 1.1719        | 2.0   | 450   | 1.2211          | 0.3617   |
| 1.0758        | 3.0   | 675   | 1.1630          | 0.3733   |
| 1.0147        | 4.0   | 900   | 1.1086          | 0.4033   |
| 1.0074        | 5.0   | 1125  | 1.0560          | 0.4317   |
| 0.9405        | 6.0   | 1350  | 1.0063          | 0.4617   |
| 0.9199        | 7.0   | 1575  | 0.9602          | 0.51     |
| 0.9125        | 8.0   | 1800  | 0.9177          | 0.5617   |
| 0.8654        | 9.0   | 2025  | 0.8771          | 0.6017   |
| 0.8229        | 10.0  | 2250  | 0.8432          | 0.6333   |
| 0.8209        | 11.0  | 2475  | 0.8129          | 0.6567   |
| 0.775         | 12.0  | 2700  | 0.7860          | 0.675    |
| 0.7435        | 13.0  | 2925  | 0.7620          | 0.6883   |
| 0.7034        | 14.0  | 3150  | 0.7408          | 0.695    |
| 0.7434        | 15.0  | 3375  | 0.7223          | 0.7033   |
| 0.7412        | 16.0  | 3600  | 0.7055          | 0.7133   |
| 0.6871        | 17.0  | 3825  | 0.6906          | 0.7167   |
| 0.6997        | 18.0  | 4050  | 0.6769          | 0.725    |
| 0.6998        | 19.0  | 4275  | 0.6646          | 0.7267   |
| 0.6623        | 20.0  | 4500  | 0.6540          | 0.7283   |
| 0.668         | 21.0  | 4725  | 0.6441          | 0.73     |
| 0.6697        | 22.0  | 4950  | 0.6349          | 0.7317   |
| 0.6394        | 23.0  | 5175  | 0.6268          | 0.7383   |
| 0.6267        | 24.0  | 5400  | 0.6193          | 0.7383   |
| 0.6154        | 25.0  | 5625  | 0.6125          | 0.7433   |
| 0.5813        | 26.0  | 5850  | 0.6070          | 0.745    |
| 0.612         | 27.0  | 6075  | 0.6014          | 0.7483   |
| 0.6011        | 28.0  | 6300  | 0.5964          | 0.7483   |
| 0.5913        | 29.0  | 6525  | 0.5915          | 0.7517   |
| 0.5609        | 30.0  | 6750  | 0.5872          | 0.76     |
| 0.5861        | 31.0  | 6975  | 0.5835          | 0.7617   |
| 0.5483        | 32.0  | 7200  | 0.5800          | 0.76     |
| 0.5986        | 33.0  | 7425  | 0.5766          | 0.7633   |
| 0.619         | 34.0  | 7650  | 0.5736          | 0.7617   |
| 0.5813        | 35.0  | 7875  | 0.5710          | 0.765    |
| 0.6084        | 36.0  | 8100  | 0.5683          | 0.7667   |
| 0.6052        | 37.0  | 8325  | 0.5664          | 0.765    |
| 0.5601        | 38.0  | 8550  | 0.5646          | 0.765    |
| 0.5878        | 39.0  | 8775  | 0.5631          | 0.7633   |
| 0.6072        | 40.0  | 9000  | 0.5616          | 0.7633   |
| 0.5597        | 41.0  | 9225  | 0.5601          | 0.7683   |
| 0.5694        | 42.0  | 9450  | 0.5588          | 0.7667   |
| 0.5553        | 43.0  | 9675  | 0.5575          | 0.77     |
| 0.5942        | 44.0  | 9900  | 0.5566          | 0.77     |
| 0.6005        | 45.0  | 10125 | 0.5559          | 0.77     |
| 0.58          | 46.0  | 10350 | 0.5553          | 0.77     |
| 0.5814        | 47.0  | 10575 | 0.5548          | 0.77     |
| 0.5609        | 48.0  | 10800 | 0.5545          | 0.7717   |
| 0.6076        | 49.0  | 11025 | 0.5543          | 0.77     |
| 0.5819        | 50.0  | 11250 | 0.5543          | 0.77     |

### Framework versions

- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2