---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
  - f1
  - precision
  - recall
model-index:
  - name: physiotheraphy-E2
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9673024523160763
          - name: F1
            type: f1
            value: 0.9684234987255815
          - name: Precision
            type: precision
            value: 0.9707593418301198
          - name: Recall
            type: recall
            value: 0.9667053446477023
---

physiotheraphy-E2

This model is a fine-tuned version of google/vit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Accuracy: 0.9673

  • F1: 0.9684

  • Precision: 0.9708

  • Recall: 0.9667

  • Loss: 0.1718

  • Classification Report:

                precision    recall  f1-score   support

         0       0.92      0.95      0.93        57
         1       0.97      0.99      0.98        70
         2       0.97      1.00      0.99        33
         3       1.00      0.95      0.98        43
         4       0.97      1.00      0.99        34
         5       1.00      0.97      0.98        32
         6       0.97      0.97      0.97        65
         7       0.97      0.91      0.94        33
    

        accuracy                           0.97       367
       macro avg       0.97      0.97      0.97       367
    weighted avg       0.97      0.97      0.97       367

  • Confusion Matrix: [[0.9473684210526315, 0.0, 0.0, 0.0, 0.017543859649122806, 0.0, 0.017543859649122806, 0.017543859649122806], [0.014285714285714285, 0.9857142857142858, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.046511627906976744, 0.0, 0.9534883720930233, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.03125, 0.0, 0.0, 0.0, 0.0, 0.96875, 0.0, 0.0], [0.03076923076923077, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9692307692307692, 0.0], [0.030303030303030304, 0.0, 0.030303030303030304, 0.0, 0.0, 0.0, 0.030303030303030304, 0.9090909090909091]]

Model description

More information needed

Intended uses & limitations

More information needed
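As a starting point, the model can be queried through the standard transformers image-classification pipeline. The sketch below is illustrative only: the repo id khalilUoM/physiotheraphy-E2 is inferred from this card, and example.jpg stands in for any input image.

```python
from transformers import pipeline

# Repo id inferred from this card; adjust if the model is hosted elsewhere.
classifier = pipeline("image-classification", model="khalilUoM/physiotheraphy-E2")

# "example.jpg" is an illustrative path; any RGB image works.
predictions = classifier("example.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```

Each prediction is a label/score pair; the labels correspond to the eight classes listed in the evaluation report above.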

Training and evaluation data

More information needed
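The card metadata only indicates an imagefolder dataset. As a reference for how such a dataset is usually structured, the sketch below assumes a local directory with one sub-folder per class; the path data/ is purely illustrative.

```python
from datasets import load_dataset

# Illustrative layout: one sub-folder per class, e.g. data/class_0/img_001.jpg
dataset = load_dataset("imagefolder", data_dir="data")

# Label names are inferred from the sub-folder names.
print(dataset["train"].features["label"].names)
```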

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 8
  • mixed_precision_training: Native AMP
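For orientation, these settings map onto the transformers Trainer API roughly as in the sketch below. This is not the original training script: output_dir, the per-epoch evaluation schedule, and the 8-label head (taken from the classification reports) are assumptions.

```python
from transformers import ViTForImageClassification, TrainingArguments

# 8 output classes, matching the per-class reports in this card (assumed label count).
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=8,
    ignore_mismatched_sizes=True,  # replace the original ImageNet classification head
)

training_args = TrainingArguments(
    output_dir="physiotheraphy-E2",   # illustrative
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,    # effective train batch size of 8
    num_train_epochs=8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    fp16=True,                        # native AMP mixed precision
    eval_strategy="epoch",            # evaluation roughly once per epoch, as in the table below
)
```

The model and arguments would then be passed to a Trainer together with the image processor and the train/evaluation splits.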

Training results

| Training Loss | Epoch  | Step | Accuracy | F1     | Precision | Recall | Validation Loss |
|:-------------:|:------:|:----:|:--------:|:------:|:---------:|:------:|:---------------:|
| 0.9941        | 0.9973 | 182  | 0.6975   | 0.6724 | 0.7754    | 0.6769 | 0.9489          |
| 0.6919        | 2.0    | 365  | 0.8665   | 0.8633 | 0.8600    | 0.8742 | 0.4393          |
| 0.4322        | 2.9973 | 547  | 0.8501   | 0.8412 | 0.8687    | 0.8387 | 0.6005          |
| 0.2358        | 4.0    | 730  | 0.9401   | 0.9392 | 0.9461    | 0.9370 | 0.2496          |
| 0.0904        | 4.9973 | 912  | 0.9401   | 0.9448 | 0.9506    | 0.9429 | 0.2831          |
| 0.0313        | 6.0    | 1095 | 0.9673   | 0.9684 | 0.9708    | 0.9667 | 0.1718          |
| 0.0047        | 6.9973 | 1277 | 0.9646   | 0.9630 | 0.9675    | 0.9604 | 0.1481          |
| 0.0019        | 7.9781 | 1456 | 0.9646   | 0.9630 | 0.9675    | 0.9604 | 0.1477          |

The per-class classification report and row-normalized confusion matrix logged at each evaluation step are listed below.

Epoch 0.9973 (step 182):

              precision    recall  f1-score   support
       0       0.93      0.46      0.61        57
       1       0.92      0.79      0.85        70
       2       0.80      0.48      0.60        33
       3       0.86      0.70      0.77        43
       4       0.39      1.00      0.56        34
       5       0.74      0.72      0.73        32
       6       0.66      0.94      0.77        65
       7       0.92      0.33      0.49        33

accuracy                           0.70       367

   macro avg       0.78      0.68      0.67       367
weighted avg       0.79      0.70      0.70       367

Confusion matrix: [[0.45614035087719296, 0.0, 0.05263157894736842, 0.03508771929824561, 0.24561403508771928, 0.05263157894736842, 0.15789473684210525, 0.0], [0.0, 0.7857142857142857, 0.0, 0.04285714285714286, 0.08571428571428572, 0.0, 0.07142857142857142, 0.014285714285714285], [0.0, 0.0, 0.48484848484848486, 0.0, 0.48484848484848486, 0.030303030303030304, 0.0, 0.0], [0.0, 0.023255813953488372, 0.0, 0.6976744186046512, 0.09302325581395349, 0.023255813953488372, 0.16279069767441862, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.0, 0.0625, 0.0, 0.0, 0.21875, 0.71875, 0.0, 0.0], [0.015384615384615385, 0.0, 0.0, 0.0, 0.046153846153846156, 0.0, 0.9384615384615385, 0.0], [0.030303030303030304, 0.06060606060606061, 0.030303030303030304, 0.0, 0.12121212121212122, 0.09090909090909091, 0.3333333333333333, 0.3333333333333333]]

Epoch 2.0 (step 365):

              precision    recall  f1-score   support

       0       0.84      0.63      0.72        57
       1       0.86      0.93      0.89        70
       2       0.84      0.97      0.90        33
       3       1.00      0.95      0.98        43
       4       0.89      1.00      0.94        34
       5       0.85      0.91      0.88        32
       6       0.97      0.88      0.92        65
       7       0.63      0.73      0.68        33

accuracy                           0.87       367

   macro avg       0.86      0.87      0.86       367
weighted avg       0.87      0.87      0.86       367

Confusion matrix: [[0.631578947368421, 0.08771929824561403, 0.08771929824561403, 0.0, 0.03508771929824561, 0.07017543859649122, 0.0, 0.08771929824561403], [0.02857142857142857, 0.9285714285714286, 0.0, 0.0, 0.014285714285714285, 0.014285714285714285, 0.0, 0.014285714285714285], [0.0, 0.0, 0.9696969696969697, 0.0, 0.030303030303030304, 0.0, 0.0, 0.0], [0.023255813953488372, 0.0, 0.0, 0.9534883720930233, 0.0, 0.0, 0.0, 0.023255813953488372], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 0.90625, 0.0, 0.09375], [0.03076923076923077, 0.03076923076923077, 0.0, 0.0, 0.0, 0.0, 0.8769230769230769, 0.06153846153846154], [0.06060606060606061, 0.12121212121212122, 0.030303030303030304, 0.0, 0.0, 0.0, 0.06060606060606061, 0.7272727272727273]]

Epoch 2.9973 (step 547):

              precision    recall  f1-score   support

       0       0.78      0.95      0.86        57
       1       1.00      0.76      0.86        70
       2       0.96      0.76      0.85        33
       3       0.83      0.91      0.87        43
       4       0.72      1.00      0.84        34
       5       0.92      0.75      0.83        32
       6       0.82      0.95      0.88        65
       7       0.91      0.64      0.75        33

accuracy                           0.85       367

   macro avg       0.87      0.84      0.84       367
weighted avg       0.87      0.85      0.85       367

Confusion matrix: [[0.9473684210526315, 0.0, 0.0, 0.03508771929824561, 0.0, 0.0, 0.017543859649122806, 0.0], [0.05714285714285714, 0.7571428571428571, 0.0, 0.04285714285714286, 0.05714285714285714, 0.014285714285714285, 0.04285714285714286, 0.02857142857142857], [0.030303030303030304, 0.0, 0.7575757575757576, 0.0, 0.12121212121212122, 0.030303030303030304, 0.06060606060606061, 0.0], [0.046511627906976744, 0.0, 0.0, 0.9069767441860465, 0.0, 0.0, 0.046511627906976744, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.09375, 0.0, 0.0, 0.0, 0.15625, 0.75, 0.0, 0.0], [0.015384615384615385, 0.0, 0.0, 0.03076923076923077, 0.0, 0.0, 0.9538461538461539, 0.0], [0.12121212121212122, 0.0, 0.030303030303030304, 0.030303030303030304, 0.0, 0.0, 0.18181818181818182, 0.6363636363636364]]

Epoch 4.0 (step 730):

              precision    recall  f1-score   support

       0       0.82      0.96      0.89        57
       1       1.00      0.91      0.96        70
       2       1.00      0.94      0.97        33
       3       0.95      0.98      0.97        43
       4       1.00      0.85      0.92        34
       5       0.89      1.00      0.94        32
       6       0.97      0.97      0.97        65
       7       0.94      0.88      0.91        33

accuracy                           0.94       367

   macro avg       0.95      0.94      0.94       367
weighted avg       0.95      0.94      0.94       367

Confusion matrix: [[0.9649122807017544, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.03508771929824561], [0.02857142857142857, 0.9142857142857143, 0.0, 0.014285714285714285, 0.0, 0.04285714285714286, 0.0, 0.0], [0.06060606060606061, 0.0, 0.9393939393939394, 0.0, 0.0, 0.0, 0.0, 0.0], [0.023255813953488372, 0.0, 0.0, 0.9767441860465116, 0.0, 0.0, 0.0, 0.0], [0.11764705882352941, 0.0, 0.0, 0.0, 0.8529411764705882, 0.029411764705882353, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0], [0.03076923076923077, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9692307692307692, 0.0], [0.030303030303030304, 0.0, 0.0, 0.030303030303030304, 0.0, 0.0, 0.06060606060606061, 0.8787878787878788]]

Epoch 4.9973 (step 912):

              precision    recall  f1-score   support

       0       0.79      0.98      0.88        57
       1       0.98      0.93      0.96        70
       2       1.00      0.94      0.97        33
       3       1.00      0.95      0.98        43
       4       0.97      1.00      0.99        34
       5       0.97      0.94      0.95        32
       6       0.98      0.89      0.94        65
       7       0.91      0.91      0.91        33

accuracy                           0.94       367

   macro avg       0.95      0.94      0.94       367
weighted avg       0.95      0.94      0.94       367

Confusion matrix: [[0.9824561403508771, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.017543859649122806], [0.04285714285714286, 0.9285714285714286, 0.0, 0.0, 0.0, 0.014285714285714285, 0.0, 0.014285714285714285], [0.030303030303030304, 0.0, 0.9393939393939394, 0.0, 0.030303030303030304, 0.0, 0.0, 0.0], [0.046511627906976744, 0.0, 0.0, 0.9534883720930233, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.03125, 0.03125, 0.0, 0.0, 0.0, 0.9375, 0.0, 0.0], [0.09230769230769231, 0.0, 0.0, 0.0, 0.0, 0.0, 0.8923076923076924, 0.015384615384615385], [0.06060606060606061, 0.0, 0.0, 0.0, 0.0, 0.0, 0.030303030303030304, 0.9090909090909091]]

Epoch 6.0 (step 1095):

              precision    recall  f1-score   support

       0       0.92      0.95      0.93        57
       1       0.97      0.99      0.98        70
       2       0.97      1.00      0.99        33
       3       1.00      0.95      0.98        43
       4       0.97      1.00      0.99        34
       5       1.00      0.97      0.98        32
       6       0.97      0.97      0.97        65
       7       0.97      0.91      0.94        33

accuracy                           0.97       367

   macro avg       0.97      0.97      0.97       367
weighted avg       0.97      0.97      0.97       367

Confusion matrix: [[0.9473684210526315, 0.0, 0.0, 0.0, 0.017543859649122806, 0.0, 0.017543859649122806, 0.017543859649122806], [0.014285714285714285, 0.9857142857142858, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.046511627906976744, 0.0, 0.9534883720930233, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.03125, 0.0, 0.0, 0.0, 0.0, 0.96875, 0.0, 0.0], [0.03076923076923077, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9692307692307692, 0.0], [0.030303030303030304, 0.0, 0.030303030303030304, 0.0, 0.0, 0.0, 0.030303030303030304, 0.9090909090909091]]

Epoch 6.9973 (step 1277):

              precision    recall  f1-score   support

       0       0.92      0.96      0.94        57
       1       1.00      0.99      0.99        70
       2       0.97      1.00      0.99        33
       3       0.98      0.98      0.98        43
       4       0.97      1.00      0.99        34
       5       1.00      0.97      0.98        32
       6       0.94      0.97      0.95        65
       7       0.96      0.82      0.89        33

accuracy                           0.96       367

   macro avg       0.97      0.96      0.96       367
weighted avg       0.97      0.96      0.96       367

Confusion matrix: [[0.9649122807017544, 0.0, 0.0, 0.0, 0.017543859649122806, 0.0, 0.0, 0.017543859649122806], [0.0, 0.9857142857142858, 0.0, 0.014285714285714285, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.023255813953488372, 0.0, 0.0, 0.9767441860465116, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.03125, 0.0, 0.0, 0.0, 0.0, 0.96875, 0.0, 0.0], [0.03076923076923077, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9692307692307692, 0.0], [0.030303030303030304, 0.0, 0.030303030303030304, 0.0, 0.0, 0.0, 0.12121212121212122, 0.8181818181818182]]

Epoch 7.9781 (step 1456):

              precision    recall  f1-score   support

       0       0.92      0.96      0.94        57
       1       1.00      0.99      0.99        70
       2       0.97      1.00      0.99        33
       3       0.98      0.98      0.98        43
       4       0.97      1.00      0.99        34
       5       1.00      0.97      0.98        32
       6       0.94      0.97      0.95        65
       7       0.96      0.82      0.89        33

accuracy                           0.96       367

   macro avg       0.97      0.96      0.96       367
weighted avg       0.97      0.96      0.96       367

Confusion matrix: [[0.9649122807017544, 0.0, 0.0, 0.0, 0.017543859649122806, 0.0, 0.0, 0.017543859649122806], [0.0, 0.9857142857142858, 0.0, 0.014285714285714285, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], [0.023255813953488372, 0.0, 0.0, 0.9767441860465116, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0], [0.03125, 0.0, 0.0, 0.0, 0.0, 0.96875, 0.0, 0.0], [0.03076923076923077, 0.0, 0.0, 0.0, 0.0, 0.0, 0.9692307692307692, 0.0], [0.030303030303030304, 0.0, 0.030303030303030304, 0.0, 0.0, 0.0, 0.12121212121212122, 0.8181818181818182]]
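The scalar metrics, classification reports, and row-normalized confusion matrices above are the kind of output a scikit-learn based compute_metrics callback would log at each evaluation. The sketch below is one plausible implementation, not the author's code; in particular, macro averaging for F1, precision, and recall is an assumption.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score,
    classification_report,
    confusion_matrix,
    precision_recall_fscore_support,
)

def compute_metrics(eval_pred):
    """Sketch of a metrics callback for the Trainer (not the original training code)."""
    logits, labels = eval_pred.predictions, eval_pred.label_ids
    preds = np.argmax(logits, axis=-1)

    # Averaging mode is assumed to be macro; the card does not state it explicitly.
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="macro")

    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
        # Per-class text report, as shown in the table above.
        "classification_report": classification_report(labels, preds),
        # Row-normalized confusion matrix (each row sums to 1 over the true class).
        "confusion_matrix": confusion_matrix(labels, preds, normalize="true").tolist(),
    }
```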

Framework versions

  • Transformers 4.43.3
  • Pytorch 2.3.1+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1