---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_base_adamax_001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6341463414634146
---

hushem_1x_deit_base_adamax_001_fold5

This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 2.8386
  • Accuracy: 0.6341
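
If the checkpoint is published on the Hugging Face Hub as hkivancoral/hushem_1x_deit_base_adamax_001_fold5 (an assumption based on the model name; substitute a local checkpoint path otherwise), a minimal inference sketch with the transformers pipeline looks like this:

```python
from transformers import pipeline

# Assumed Hub repo id; replace with a local path to the fine-tuned checkpoint if needed.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_base_adamax_001_fold5",
)

# "example.jpg" is a placeholder path to an image from the same domain as the training data.
predictions = classifier("example.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, best match first
```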

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
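
As a rough guide to reproducing this configuration with the transformers Trainer, the sketch below maps the listed values onto TrainingArguments. It is an approximation: output_dir and evaluation_strategy are assumptions, and the original training script is not part of this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_base_adamax_001_fold5",  # hypothetical output directory
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,          # optimizer betas/epsilon as reported above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation results below
)
```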

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.4880          | 0.2683   |
| 1.545         | 2.0   | 12   | 1.4136          | 0.2439   |
| 1.545         | 3.0   | 18   | 1.3443          | 0.3171   |
| 1.396         | 4.0   | 24   | 1.1663          | 0.5122   |
| 1.3173        | 5.0   | 30   | 1.2019          | 0.4878   |
| 1.3173        | 6.0   | 36   | 1.2222          | 0.5122   |
| 1.3167        | 7.0   | 42   | 1.4763          | 0.2439   |
| 1.3167        | 8.0   | 48   | 1.1385          | 0.5610   |
| 1.2585        | 9.0   | 54   | 1.3584          | 0.3659   |
| 1.2419        | 10.0  | 60   | 1.0949          | 0.5122   |
| 1.2419        | 11.0  | 66   | 1.1100          | 0.4634   |
| 1.1714        | 12.0  | 72   | 1.2381          | 0.3902   |
| 1.1714        | 13.0  | 78   | 1.4043          | 0.4146   |
| 1.0593        | 14.0  | 84   | 1.1047          | 0.4878   |
| 1.0451        | 15.0  | 90   | 0.9907          | 0.4878   |
| 1.0451        | 16.0  | 96   | 1.3026          | 0.5122   |
| 0.8805        | 17.0  | 102  | 1.0082          | 0.6098   |
| 0.8805        | 18.0  | 108  | 1.1309          | 0.4634   |
| 0.8077        | 19.0  | 114  | 1.2367          | 0.5610   |
| 0.8096        | 20.0  | 120  | 1.4920          | 0.4878   |
| 0.8096        | 21.0  | 126  | 1.8018          | 0.4878   |
| 0.6582        | 22.0  | 132  | 1.5639          | 0.5854   |
| 0.6582        | 23.0  | 138  | 1.2712          | 0.4878   |
| 0.5106        | 24.0  | 144  | 1.1237          | 0.5854   |
| 0.4184        | 25.0  | 150  | 1.6831          | 0.5610   |
| 0.4184        | 26.0  | 156  | 2.0109          | 0.6098   |
| 0.2718        | 27.0  | 162  | 2.2516          | 0.6341   |
| 0.2718        | 28.0  | 168  | 2.0767          | 0.5610   |
| 0.1639        | 29.0  | 174  | 2.6167          | 0.5854   |
| 0.0535        | 30.0  | 180  | 2.8485          | 0.6341   |
| 0.0535        | 31.0  | 186  | 2.7124          | 0.6585   |
| 0.0454        | 32.0  | 192  | 2.8298          | 0.6585   |
| 0.0454        | 33.0  | 198  | 3.2241          | 0.6341   |
| 0.091         | 34.0  | 204  | 2.4575          | 0.5854   |
| 0.1109        | 35.0  | 210  | 3.7388          | 0.5610   |
| 0.1109        | 36.0  | 216  | 2.3707          | 0.7073   |
| 0.0834        | 37.0  | 222  | 2.5281          | 0.6341   |
| 0.0834        | 38.0  | 228  | 3.1120          | 0.6098   |
| 0.0051        | 39.0  | 234  | 2.7929          | 0.6341   |
| 0.0015        | 40.0  | 240  | 2.7025          | 0.6341   |
| 0.0015        | 41.0  | 246  | 2.8185          | 0.6341   |
| 0.0008        | 42.0  | 252  | 2.8386          | 0.6341   |
| 0.0008        | 43.0  | 258  | 2.8386          | 0.6341   |
| 0.0006        | 44.0  | 264  | 2.8386          | 0.6341   |
| 0.0007        | 45.0  | 270  | 2.8386          | 0.6341   |
| 0.0007        | 46.0  | 276  | 2.8386          | 0.6341   |
| 0.0007        | 47.0  | 282  | 2.8386          | 0.6341   |
| 0.0007        | 48.0  | 288  | 2.8386          | 0.6341   |
| 0.0006        | 49.0  | 294  | 2.8386          | 0.6341   |
| 0.0007        | 50.0  | 300  | 2.8386          | 0.6341   |

Framework versions

  • Transformers 4.35.1
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.7
  • Tokenizers 0.14.1
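
A quick way to compare a local environment against these versions (a convenience sketch, not part of the original card):

```python
from importlib.metadata import PackageNotFoundError, version

# Expected versions are taken from the "Framework versions" list above.
expected = {
    "transformers": "4.35.1",
    "torch": "2.1.0+cu118",
    "datasets": "2.14.7",
    "tokenizers": "0.14.1",
}
for package, wanted in expected.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        installed = "not installed"
    print(f"{package}: installed {installed}, trained with {wanted}")
```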