dropoff-utcustom-train-SF-RGB-b5_3

This model is a fine-tuned version of nvidia/mit-b5 on the sam1120/dropoff-utcustom-TRAIN dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3770
  • Mean Iou: 0.4572
  • Mean Accuracy: 0.7822
  • Overall Accuracy: 0.9640
  • Accuracy Unlabeled: nan
  • Accuracy Dropoff: 0.5839
  • Accuracy Undropoff: 0.9805
  • Iou Unlabeled: 0.0
  • Iou Dropoff: 0.4086
  • Iou Undropoff: 0.9631
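
For reference, below is a minimal PyTorch inference sketch. The repo id is an assumption (adjust it to the actual checkpoint path), and `example.jpg` is a placeholder input:

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

model_id = "sam1120/dropoff-utcustom-train-SF-RGB-b5_3"  # assumed repo id; adjust if hosted elsewhere
processor = SegformerImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 resolution; upsample before taking the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # per-pixel class ids
```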

Model description

This is a SegFormer semantic-segmentation model built on the MiT-b5 encoder (nvidia/mit-b5) and fine-tuned on RGB images to segment drop-off hazards, predicting per-pixel dropoff and undropoff labels alongside an unlabeled (ignore) class.

Intended uses & limitations

More information needed

Training and evaluation data

The model was fine-tuned on the sam1120/dropoff-utcustom-TRAIN dataset; further details about data collection and the evaluation split are not documented.

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-06
  • train_batch_size: 15
  • eval_batch_size: 15
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.05
  • num_epochs: 120
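
As a hedged sketch, these values map onto a transformers TrainingArguments configuration roughly as follows (dataset preparation, model instantiation, and the Trainer call are omitted; whether the batch size was per-device or total is an assumption):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dropoff-utcustom-train-SF-RGB-b5_3",  # hypothetical output path
    learning_rate=5e-6,
    per_device_train_batch_size=15,   # assumption: card's batch size taken as per-device
    per_device_eval_batch_size=15,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=120,
)
```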

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1.3135 | 5.0 | 10 | 1.2008 | 0.0546 | 0.2586 | 0.1227 | nan | 0.4069 | 0.1103 | 0.0 | 0.0535 | 0.1102 |
| 1.2309 | 10.0 | 20 | 1.1294 | 0.1176 | 0.3397 | 0.2490 | nan | 0.4388 | 0.2407 | 0.0 | 0.1129 | 0.2400 |
| 1.1346 | 15.0 | 30 | 1.0395 | 0.2171 | 0.4865 | 0.5022 | nan | 0.4694 | 0.5036 | 0.0 | 0.1524 | 0.4989 |
| 1.1088 | 20.0 | 40 | 0.9755 | 0.2608 | 0.5521 | 0.6176 | nan | 0.4808 | 0.6235 | 0.0 | 0.1661 | 0.6163 |
| 1.007 | 25.0 | 50 | 0.9197 | 0.2895 | 0.5959 | 0.6775 | nan | 0.5068 | 0.6849 | 0.0 | 0.1923 | 0.6763 |
| 0.9145 | 30.0 | 60 | 0.8635 | 0.3162 | 0.6299 | 0.7335 | nan | 0.5168 | 0.7429 | 0.0 | 0.2156 | 0.7329 |
| 0.8745 | 35.0 | 70 | 0.8070 | 0.3398 | 0.6784 | 0.7808 | nan | 0.5667 | 0.7901 | 0.0 | 0.2404 | 0.7791 |
| 0.8088 | 40.0 | 80 | 0.7442 | 0.3667 | 0.7191 | 0.8290 | nan | 0.5993 | 0.8389 | 0.0 | 0.2730 | 0.8272 |
| 0.7184 | 45.0 | 90 | 0.6956 | 0.3832 | 0.7513 | 0.8603 | nan | 0.6323 | 0.8702 | 0.0 | 0.2915 | 0.8580 |
| 0.6908 | 50.0 | 100 | 0.6751 | 0.3931 | 0.7592 | 0.8748 | nan | 0.6332 | 0.8853 | 0.0 | 0.3067 | 0.8728 |
| 0.643 | 55.0 | 110 | 0.6101 | 0.4134 | 0.7714 | 0.9108 | nan | 0.6194 | 0.9234 | 0.0 | 0.3308 | 0.9094 |
| 0.6014 | 60.0 | 120 | 0.5971 | 0.4166 | 0.7826 | 0.9189 | nan | 0.6339 | 0.9313 | 0.0 | 0.3324 | 0.9175 |
| 0.5685 | 65.0 | 130 | 0.5595 | 0.4304 | 0.7946 | 0.9328 | nan | 0.6439 | 0.9453 | 0.0 | 0.3599 | 0.9314 |
| 0.5172 | 70.0 | 140 | 0.5344 | 0.4373 | 0.8010 | 0.9406 | nan | 0.6488 | 0.9532 | 0.0 | 0.3727 | 0.9393 |
| 0.4757 | 75.0 | 150 | 0.4963 | 0.4434 | 0.7997 | 0.9490 | nan | 0.6368 | 0.9626 | 0.0 | 0.3822 | 0.9479 |
| 0.4288 | 80.0 | 160 | 0.4599 | 0.4488 | 0.7936 | 0.9556 | nan | 0.6169 | 0.9702 | 0.0 | 0.3918 | 0.9546 |
| 0.4124 | 85.0 | 170 | 0.4710 | 0.4469 | 0.7989 | 0.9540 | nan | 0.6296 | 0.9681 | 0.0 | 0.3876 | 0.9529 |
| 0.4995 | 90.0 | 180 | 0.4209 | 0.4537 | 0.7883 | 0.9606 | nan | 0.6004 | 0.9762 | 0.0 | 0.4015 | 0.9597 |
| 0.3815 | 95.0 | 190 | 0.4287 | 0.4524 | 0.7919 | 0.9595 | nan | 0.6090 | 0.9748 | 0.0 | 0.3988 | 0.9586 |
| 0.3764 | 100.0 | 200 | 0.4245 | 0.4529 | 0.7913 | 0.9600 | nan | 0.6073 | 0.9753 | 0.0 | 0.3998 | 0.9590 |
| 0.4074 | 105.0 | 210 | 0.4096 | 0.4542 | 0.7894 | 0.9613 | nan | 0.6018 | 0.9769 | 0.0 | 0.4021 | 0.9603 |
| 0.3975 | 110.0 | 220 | 0.4107 | 0.4538 | 0.7905 | 0.9610 | nan | 0.6045 | 0.9765 | 0.0 | 0.4013 | 0.9601 |
| 0.3598 | 115.0 | 230 | 0.3918 | 0.4558 | 0.7863 | 0.9627 | nan | 0.5939 | 0.9787 | 0.0 | 0.4057 | 0.9618 |
| 0.3709 | 120.0 | 240 | 0.3770 | 0.4572 | 0.7822 | 0.9640 | nan | 0.5839 | 0.9805 | 0.0 | 0.4086 | 0.9631 |
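
The per-class columns above match the output layout of the mean_iou metric from the evaluate library; the nan values in Accuracy Unlabeled are consistent with the unlabeled class having no ground-truth pixels in the evaluation masks. A minimal sketch of computing these numbers, assuming label ids 0 = unlabeled, 1 = dropoff, 2 = undropoff and an ignore index of 255 (both assumptions, not documented by the card):

```python
import numpy as np
import evaluate

mean_iou = evaluate.load("mean_iou")

# Placeholder (H, W) integer label maps; real masks come from the model and dataset.
pred_mask = np.array([[0, 1], [2, 1]], dtype=np.int64)
gt_mask = np.array([[0, 1], [2, 2]], dtype=np.int64)

results = mean_iou.compute(
    predictions=[pred_mask],
    references=[gt_mask],
    num_labels=3,       # assumption: 0 = unlabeled, 1 = dropoff, 2 = undropoff
    ignore_index=255,   # assumption: the actual ignore index is not documented
    reduce_labels=False,
)
print(results["mean_iou"], results["per_category_iou"], results["per_category_accuracy"])
```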

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.13.1
  • Tokenizers 0.13.3