---
license: mit
base_model: google/vivit-b-16x2-kinetics400
tags:
  - generated_from_trainer
metrics:
  - accuracy
model-index:
  - name: vivit-b-16x2-kinetics400-ft-92397
    results: []
---

vivit-b-16x2-kinetics400-ft-92397

This model is a fine-tuned version of google/vivit-b-16x2-kinetics400 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1332
  • Accuracy: 0.3386
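
A minimal inference sketch is shown below. It assumes the checkpoint is published as Temo27Anas/vivit-b-16x2-kinetics400-ft-92397 and that it consumes 32-frame clips like the base ViViT configuration; the class labels are not documented in this card, so the dummy input is only a placeholder.

```python
# Minimal inference sketch (assumptions: the hub id below and 32-frame clips
# as in the base ViViT config; class labels are not documented in this card).
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

model_id = "Temo27Anas/vivit-b-16x2-kinetics400-ft-92397"  # assumed hub id
processor = VivitImageProcessor.from_pretrained(model_id)
model = VivitForVideoClassification.from_pretrained(model_id)
model.eval()

# Dummy clip: 32 RGB frames of 224x224 (replace with frames sampled from a real video)
video = [np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8) for _ in range(32)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(pred, model.config.id2label.get(pred, "unknown"))
```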

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a rough TrainingArguments equivalent is sketched after this list):

  • learning_rate: 5e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 5500
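
For reference, the settings above correspond roughly to the transformers TrainingArguments sketched below; this is a reconstruction from the listed values, not the original training script, and output_dir is an assumption.

```python
# Rough TrainingArguments equivalent of the listed hyperparameters
# (a sketch reconstructed from this card, not the original training script;
# output_dir is an assumption).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vivit-b-16x2-kinetics400-ft-92397",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,        # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=5500,        # training_steps
)
```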

Training results

Training Loss | Epoch | Step | Validation Loss | Accuracy
1.1076 0.0202 111 1.1240 0.3333
1.1466 1.0202 222 1.1166 0.3333
1.1022 2.0202 333 1.1221 0.3333
1.109 3.0202 444 1.1170 0.3333
1.1007 4.0202 555 1.1375 0.3333
1.1146 5.0202 666 1.1004 0.3386
1.1575 6.0202 777 1.1249 0.3333
1.0851 7.0202 888 1.1254 0.3439
1.1069 8.0202 999 1.1136 0.3333
1.0998 9.0202 1110 1.0945 0.3545
1.1289 10.0202 1221 1.0992 0.3439
1.0674 11.0202 1332 1.0957 0.3545
1.1144 12.0202 1443 1.1139 0.3228
1.0971 13.0202 1554 1.1089 0.3228
1.0704 14.0202 1665 1.1031 0.3333
1.1064 15.0202 1776 1.1003 0.3492
1.0782 16.0202 1887 1.1026 0.3386
1.1086 17.0202 1998 1.1091 0.3175
1.0911 18.0202 2109 1.0965 0.3386
1.0961 19.0202 2220 1.1108 0.3333
1.0967 20.0202 2331 1.1029 0.3175
1.0746 21.0202 2442 1.1127 0.3333
1.1076 22.0202 2553 1.0996 0.3492
1.0786 23.0202 2664 1.1138 0.3333
1.0819 24.0202 2775 1.0970 0.3651
1.1031 25.0202 2886 1.1135 0.3333
1.092 26.0202 2997 1.1050 0.3439
1.103 27.0202 3108 1.1039 0.3598
1.0903 28.0202 3219 1.1149 0.3333
1.1232 29.0202 3330 1.1062 0.3333
1.106 30.0202 3441 1.1124 0.3175
1.0607 31.0202 3552 1.1095 0.3333
1.0839 32.0202 3663 1.1083 0.3386
1.0867 33.0202 3774 1.1007 0.3545
1.0913 34.0202 3885 1.0996 0.3598
1.0567 35.0202 3996 1.0946 0.3386
1.0877 36.0202 4107 1.1004 0.3280
1.0828 37.0202 4218 1.1074 0.3228
1.131 38.0202 4329 1.0992 0.3122
1.0299 39.0202 4440 1.1035 0.3280
1.0864 40.0202 4551 1.0947 0.3386
1.0643 41.0202 4662 1.1006 0.3545
1.0687 42.0202 4773 1.1056 0.3280
1.0978 43.0202 4884 1.0907 0.3598
1.0273 44.0202 4995 1.0969 0.3439
1.0459 45.0202 5106 1.1021 0.3492
1.0561 46.0202 5217 1.1003 0.3386
1.0482 47.0202 5328 1.1028 0.3386
1.0916 48.0202 5439 1.1053 0.3545
1.0729 49.0111 5500 1.1055 0.3545

Framework versions

  • Transformers 4.41.2
  • PyTorch 1.13.0+cu117
  • Datasets 2.20.0
  • Tokenizers 0.19.1