---
license: apache-2.0
base_model: google/vit-base-patch16-384
tags:
- generated_from_trainer
datasets:
- webdataset
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: vit-base-patch16-384-finetuned_v2024-7-25-frost
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: webdataset
      type: webdataset
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9746666666666667
    - name: F1
      type: f1
      value: 0.9372937293729373
    - name: Precision
      type: precision
      value: 0.9342105263157895
    - name: Recall
      type: recall
      value: 0.9403973509933775
---

# vit-base-patch16-384-finetuned_v2024-7-25-frost

This model is a fine-tuned version of [google/vit-base-patch16-384](https://huggingface.co/google/vit-base-patch16-384) on the webdataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0795
- Accuracy: 0.9747
- F1: 0.9373
- Precision: 0.9342
- Recall: 0.9404

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.0624        | 1.7544  | 100  | 0.0458          | 0.9867   | 0.9665 | 0.9774    | 0.9558 |
| 0.0729        | 3.5088  | 200  | 0.0942          | 0.9689   | 0.9220 | 0.9303    | 0.9139 |
| 0.0566        | 5.2632  | 300  | 0.0802          | 0.9720   | 0.9311 | 0.9221    | 0.9404 |
| 0.0510        | 7.0175  | 400  | 0.0965          | 0.9631   | 0.9066 | 0.9243    | 0.8896 |
| 0.0686        | 8.7719  | 500  | 0.0795          | 0.9747   | 0.9373 | 0.9342    | 0.9404 |
| 0.0271        | 10.5263 | 600  | 0.0935          | 0.9693   | 0.9239 | 0.9229    | 0.9249 |
| 0.0273        | 12.2807 | 700  | 0.0975          | 0.9716   | 0.9300 | 0.9219    | 0.9382 |
| 0.0445        | 14.0351 | 800  | 0.0910          | 0.9698   | 0.9248 | 0.9268    | 0.9227 |
| 0.0217        | 15.7895 | 900  | 0.0942          | 0.9698   | 0.9243 | 0.9326    | 0.9161 |
| 0.0257        | 17.5439 | 1000 | 0.0906          | 0.9684   | 0.9210 | 0.9283    | 0.9139 |
| 0.0188        | 19.2982 | 1100 | 0.1028          | 0.9676   | 0.9181 | 0.9338    | 0.9029 |
| 0.0196        | 21.0526 | 1200 | 0.1020          | 0.9698   | 0.9244 | 0.9306    | 0.9183 |
| 0.0250        | 22.8070 | 1300 | 0.1005          | 0.9702   | 0.9258 | 0.9289    | 0.9227 |
| 0.0090        | 24.5614 | 1400 | 0.0976          | 0.9729   | 0.9324 | 0.9356    | 0.9294 |
| 0.0184        | 26.3158 | 1500 | 0.0987          | 0.9716   | 0.9290 | 0.9332    | 0.9249 |
| 0.0048        | 28.0702 | 1600 | 0.0958          | 0.9720   | 0.9301 | 0.9353    | 0.9249 |
| 0.0072        | 29.8246 | 1700 | 0.0948          | 0.9720   | 0.9301 | 0.9353    | 0.9249 |

### Framework versions

- Transformers 4.42.4
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
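
### Training configuration (sketch)

The hyperparameters above map onto `TrainingArguments` roughly as shown below. This is a minimal sketch rather than the original training script: the `output_dir` is a placeholder, the evaluation cadence is inferred from the results table (validation every 100 steps), and the dataset loading and `Trainer` wiring are omitted.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments mirroring the hyperparameters listed above.
# output_dir is a placeholder; eval_strategy/eval_steps are inferred from
# the results table, not stated explicitly in this card.
training_args = TrainingArguments(
    output_dir="vit-base-patch16-384-finetuned_v2024-7-25-frost",
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
    eval_strategy="steps",
    eval_steps=100,
)
```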
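
### Metrics computation (sketch)

The accuracy, F1, precision, and recall reported above are consistent with a standard `compute_metrics` callback. The sketch below uses the `evaluate` library and is an assumption; the original metric code is not included in this card, and the single (non-averaged) F1/precision/recall values suggest, but do not confirm, a two-class task.

```python
import numpy as np
import evaluate

# Load the four metrics reported in this card. With default settings,
# f1/precision/recall use binary averaging, matching the single values
# reported above (assuming a two-class task).
metrics = {name: evaluate.load(name) for name in ("accuracy", "f1", "precision", "recall")}

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        name: metric.compute(predictions=preds, references=labels)[name]
        for name, metric in metrics.items()
    }
```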
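
## How to use

A minimal inference sketch using the Transformers `image-classification` pipeline. The model path and image path below are placeholders, and the predicted label names depend on the label mapping saved with this checkpoint.

```python
from transformers import pipeline

# "model" can be a local checkpoint directory or the Hub repo id where this
# model is hosted (placeholder below).
classifier = pipeline(
    "image-classification",
    model="vit-base-patch16-384-finetuned_v2024-7-25-frost",
)

# The pipeline's image processor resizes and normalizes the input to the
# 384x384 resolution the base model expects.
for pred in classifier("example.jpg"):
    print(f"{pred['label']}: {pred['score']:.4f}")
```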