metadata
license: apache-2.0
base_model: google/vit-large-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: vit-large-patch16-224-finetuned-landscape-test
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.909375

vit-large-patch16-224-finetuned-landscape-test

This model is a fine-tuned version of google/vit-large-patch16-224 on an image-classification dataset loaded with the imagefolder builder. It achieves the following results on the evaluation set:

  • Loss: 0.3101
  • Accuracy: 0.9094
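
For inference, the checkpoint can be loaded with the transformers image-classification pipeline. This is a minimal usage sketch: the repository id is inferred from this card's name and the image path is a placeholder, so adjust both as needed.

```python
from PIL import Image
from transformers import pipeline

# Repo id inferred from the card name; replace with the actual Hub path if it differs.
classifier = pipeline(
    "image-classification",
    model="vintage-lavender619/vit-large-patch16-224-finetuned-landscape-test",
)

image = Image.open("example_landscape.jpg")  # placeholder path to an RGB photo
for prediction in classifier(image, top_k=3):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```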

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
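
Beyond the imagefolder loader named in the card metadata, no details about the data are documented. A minimal loading sketch under that assumption follows; the directory path and split fraction are placeholders, not values recorded in this card.

```python
from datasets import load_dataset

# Assumes images are arranged in class-named subfolders, as the
# imagefolder builder expects; the actual path and eval split are unknown.
dataset = load_dataset("imagefolder", data_dir="path/to/landscape_images")
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
print(train_ds.features["label"].names)  # class names inferred from folder names
```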

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
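
These settings map roughly onto transformers TrainingArguments as sketched below; output_dir, the evaluation/save strategies, and the best-model selection are assumptions for illustration, not values recorded in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-large-patch16-224-finetuned-landscape-test",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 x 4 = 128 effective train batch size
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",     # assumption: matches the per-epoch table below
    save_strategy="epoch",           # assumption
    load_best_model_at_end=True,     # assumption, consistent with the reported results
    metric_for_best_model="accuracy",
)
```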

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3906        | 1.0   | 10   | 1.1521          | 0.4969   |
| 0.9140        | 2.0   | 20   | 0.7812          | 0.6687   |
| 0.6704        | 3.0   | 30   | 0.5566          | 0.7688   |
| 0.4609        | 4.0   | 40   | 0.4363          | 0.8313   |
| 0.4040        | 5.0   | 50   | 0.4807          | 0.8156   |
| 0.3948        | 6.0   | 60   | 0.4216          | 0.8531   |
| 0.3535        | 7.0   | 70   | 0.3281          | 0.8688   |
| 0.3107        | 8.0   | 80   | 0.2972          | 0.9000   |
| 0.3086        | 9.0   | 90   | 0.3328          | 0.8812   |
| 0.2564        | 10.0  | 100  | 0.3517          | 0.8875   |
| 0.2654        | 11.0  | 110  | 0.3985          | 0.8594   |
| 0.2733        | 12.0  | 120  | 0.2870          | 0.9062   |
| 0.2511        | 13.0  | 130  | 0.4177          | 0.8875   |
| 0.2762        | 14.0  | 140  | 0.3579          | 0.8938   |
| 0.2188        | 15.0  | 150  | 0.3348          | 0.8906   |
| 0.2265        | 16.0  | 160  | 0.3046          | 0.9031   |
| 0.2054        | 17.0  | 170  | 0.3305          | 0.8969   |
| 0.1951        | 18.0  | 180  | 0.3576          | 0.8812   |
| 0.1762        | 19.0  | 190  | 0.3985          | 0.8812   |
| 0.2264        | 20.0  | 200  | 0.3711          | 0.9031   |
| 0.1958        | 21.0  | 210  | 0.3259          | 0.8875   |
| 0.1765        | 22.0  | 220  | 0.3804          | 0.8938   |
| 0.1859        | 23.0  | 230  | 0.3464          | 0.9000   |
| 0.1915        | 24.0  | 240  | 0.3742          | 0.8906   |
| 0.1667        | 25.0  | 250  | 0.3200          | 0.9062   |
| 0.1744        | 26.0  | 260  | 0.3545          | 0.8938   |
| 0.1595        | 27.0  | 270  | 0.3101          | 0.9094   |
| 0.1793        | 28.0  | 280  | 0.3230          | 0.8969   |
| 0.1596        | 29.0  | 290  | 0.3268          | 0.9000   |
| 0.1690        | 30.0  | 300  | 0.3321          | 0.8969   |
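
The Accuracy column is plain top-1 classification accuracy on the held-out split; the reported evaluation results (loss 0.3101, accuracy 0.9094) match the epoch-27 row, which suggests the best checkpoint was kept. A compute_metrics function along these lines would reproduce the metric with the Trainer; this is a sketch, not the original training script.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair provided by the Trainer at evaluation time.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```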

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1