dropoff-utcustom-train-SF-RGB-b0_4
This model is a fine-tuned version of nvidia/mit-b0 on the sam1120/dropoff-utcustom-TRAIN dataset. It achieves the following results on the evaluation set:
- Loss: 0.3032
- Mean Iou: 0.6301
- Mean Accuracy: 0.6710
- Overall Accuracy: 0.9634
- Accuracy Unlabeled: nan
- Accuracy Dropoff: 0.3502
- Accuracy Undropoff: 0.9918
- Iou Unlabeled: nan
- Iou Dropoff: 0.2973
- Iou Undropoff: 0.9628
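The card ships no usage code, but as a SegFormer (mit-b0) checkpoint it should load through the standard transformers semantic-segmentation classes. The sketch below is an illustration under stated assumptions: the Hub repo id and the input file name are placeholders, not taken from this card.

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

# Assumed Hub repo id; adjust to wherever this checkpoint is actually hosted.
checkpoint = "sam1120/dropoff-utcustom-train-SF-RGB-b0_4"
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 resolution; upsample before taking the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # (H, W) map of class indices
```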
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 120
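For readers reproducing the run, these values map directly onto transformers.TrainingArguments. The sketch below is a hypothetical reconstruction, since the original training script is not part of this card: output_dir and the steps-based evaluation cadence (every 10 steps, matching the results table below) are assumptions, and the Adam betas and epsilon listed above are the Trainer defaults, so they need no explicit flags.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration; not the author's script.
training_args = TrainingArguments(
    output_dir="dropoff-utcustom-train-SF-RGB-b0_4",  # assumed
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=120,
    evaluation_strategy="steps",  # assumed: the table logs a validation pass every 10 steps
    eval_steps=10,
    logging_steps=10,
)
```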
Training results
Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Dropoff | Accuracy Undropoff | Iou Unlabeled | Iou Dropoff | Iou Undropoff |
---|---|---|---|---|---|---|---|---|---|---|---|---|
1.0311 | 3.33 | 10 | 1.0742 | 0.2063 | 0.6373 | 0.5492 | nan | 0.7339 | 0.5406 | 0.0 | 0.0848 | 0.5342 |
0.9741 | 6.67 | 20 | 1.0151 | 0.3072 | 0.8067 | 0.7686 | nan | 0.8485 | 0.7649 | 0.0 | 0.1619 | 0.7596 |
0.9441 | 10.0 | 30 | 0.9345 | 0.3432 | 0.8327 | 0.8408 | nan | 0.8239 | 0.8416 | 0.0 | 0.1947 | 0.8348 |
0.8222 | 13.33 | 40 | 0.8358 | 0.3643 | 0.8236 | 0.8773 | nan | 0.7646 | 0.8825 | 0.0 | 0.2199 | 0.8731 |
0.7243 | 16.67 | 50 | 0.7135 | 0.3924 | 0.7838 | 0.9194 | nan | 0.6350 | 0.9325 | 0.0 | 0.2603 | 0.9170 |
0.7213 | 20.0 | 60 | 0.6358 | 0.4054 | 0.7528 | 0.9374 | nan | 0.5502 | 0.9554 | 0.0 | 0.2805 | 0.9359 |
0.5836 | 23.33 | 70 | 0.5604 | 0.4211 | 0.7412 | 0.9505 | nan | 0.5115 | 0.9708 | 0.0 | 0.3139 | 0.9493 |
0.5285 | 26.67 | 80 | 0.5227 | 0.4281 | 0.7570 | 0.9519 | nan | 0.5432 | 0.9708 | 0.0 | 0.3335 | 0.9507 |
0.4955 | 30.0 | 90 | 0.4478 | 0.4191 | 0.6945 | 0.9581 | nan | 0.4052 | 0.9837 | 0.0 | 0.2999 | 0.9573 |
0.4646 | 33.33 | 100 | 0.4537 | 0.4215 | 0.6998 | 0.9584 | nan | 0.4161 | 0.9835 | 0.0 | 0.3069 | 0.9576 |
0.4356 | 36.67 | 110 | 0.4454 | 0.4224 | 0.7105 | 0.9569 | nan | 0.4402 | 0.9808 | 0.0 | 0.3112 | 0.9560 |
0.4829 | 40.0 | 120 | 0.4099 | 0.4196 | 0.6901 | 0.9593 | nan | 0.3947 | 0.9854 | 0.0 | 0.3002 | 0.9585 |
0.4051 | 43.33 | 130 | 0.3911 | 0.6267 | 0.6784 | 0.9607 | nan | 0.3687 | 0.9881 | nan | 0.2933 | 0.9600 |
0.3916 | 46.67 | 140 | 0.3841 | 0.4183 | 0.6897 | 0.9586 | nan | 0.3946 | 0.9847 | 0.0 | 0.2969 | 0.9579 |
0.3713 | 50.0 | 150 | 0.3788 | 0.4248 | 0.7001 | 0.9600 | nan | 0.4149 | 0.9853 | 0.0 | 0.3150 | 0.9593 |
0.359 | 53.33 | 160 | 0.3719 | 0.6254 | 0.6761 | 0.9607 | nan | 0.3639 | 0.9883 | nan | 0.2908 | 0.9601 |
0.3459 | 56.67 | 170 | 0.3610 | 0.6245 | 0.6774 | 0.9601 | nan | 0.3673 | 0.9876 | nan | 0.2895 | 0.9594 |
0.3099 | 60.0 | 180 | 0.3455 | 0.6246 | 0.6687 | 0.9620 | nan | 0.3468 | 0.9905 | nan | 0.2879 | 0.9614 |
0.3124 | 63.33 | 190 | 0.3436 | 0.6277 | 0.6763 | 0.9615 | nan | 0.3634 | 0.9892 | nan | 0.2946 | 0.9608 |
0.3283 | 66.67 | 200 | 0.3344 | 0.6237 | 0.6607 | 0.9634 | nan | 0.3286 | 0.9928 | nan | 0.2845 | 0.9629 |
0.2974 | 70.0 | 210 | 0.3412 | 0.6312 | 0.6817 | 0.9616 | nan | 0.3746 | 0.9888 | nan | 0.3014 | 0.9609 |
0.3003 | 73.33 | 220 | 0.3322 | 0.6320 | 0.6877 | 0.9607 | nan | 0.3881 | 0.9872 | nan | 0.3041 | 0.9600 |
0.2968 | 76.67 | 230 | 0.3289 | 0.6344 | 0.6807 | 0.9628 | nan | 0.3712 | 0.9902 | nan | 0.3066 | 0.9622 |
0.4415 | 80.0 | 240 | 0.3333 | 0.6320 | 0.6800 | 0.9622 | nan | 0.3705 | 0.9896 | nan | 0.3024 | 0.9615 |
0.2836 | 83.33 | 250 | 0.3271 | 0.6287 | 0.6757 | 0.9619 | nan | 0.3617 | 0.9897 | nan | 0.2960 | 0.9613 |
0.2762 | 86.67 | 260 | 0.3203 | 0.6263 | 0.6673 | 0.9629 | nan | 0.3429 | 0.9916 | nan | 0.2903 | 0.9623 |
0.3901 | 90.0 | 270 | 0.3186 | 0.6290 | 0.6787 | 0.9614 | nan | 0.3685 | 0.9889 | nan | 0.2971 | 0.9608 |
0.2755 | 93.33 | 280 | 0.3086 | 0.6283 | 0.6693 | 0.9631 | nan | 0.3468 | 0.9917 | nan | 0.2940 | 0.9625 |
0.2652 | 96.67 | 290 | 0.3099 | 0.6302 | 0.6779 | 0.9620 | nan | 0.3661 | 0.9896 | nan | 0.2991 | 0.9614 |
0.2627 | 100.0 | 300 | 0.3056 | 0.6294 | 0.6728 | 0.9627 | nan | 0.3548 | 0.9909 | nan | 0.2966 | 0.9622 |
0.2647 | 103.33 | 310 | 0.3036 | 0.6292 | 0.6689 | 0.9635 | nan | 0.3458 | 0.9921 | nan | 0.2954 | 0.9629 |
0.2697 | 106.67 | 320 | 0.3043 | 0.6298 | 0.6713 | 0.9632 | nan | 0.3510 | 0.9916 | nan | 0.2970 | 0.9626 |
0.3878 | 110.0 | 330 | 0.3037 | 0.6297 | 0.6740 | 0.9626 | nan | 0.3573 | 0.9907 | nan | 0.2973 | 0.9620 |
0.2521 | 113.33 | 340 | 0.3013 | 0.6300 | 0.6714 | 0.9633 | nan | 0.3513 | 0.9916 | nan | 0.2974 | 0.9627 |
0.2663 | 116.67 | 350 | 0.3060 | 0.6298 | 0.6766 | 0.9621 | nan | 0.3634 | 0.9899 | nan | 0.2981 | 0.9615 |
0.2507 | 120.0 | 360 | 0.3032 | 0.6301 | 0.6710 | 0.9634 | nan | 0.3502 | 0.9918 | nan | 0.2973 | 0.9628 |
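Metrics of this form (Mean Iou, Mean Accuracy, Overall Accuracy, per-class Iou/Accuracy) are typically produced with the evaluate library's mean_iou metric, which reports nan for a class with no scored pixels. A toy sketch, assuming label ids 0 = unlabeled, 1 = dropoff, 2 = undropoff, and assuming the unlabeled id is excluded from scoring, which would explain the nan rows above:

```python
import numpy as np
import evaluate

mean_iou = evaluate.load("mean_iou")

# Toy 2x2 masks; in practice pass the model's per-pixel argmax and the
# ground-truth masks. Label ids are an assumption: 0 = unlabeled,
# 1 = dropoff, 2 = undropoff.
predictions = [np.array([[1, 2], [2, 2]])]
references = [np.array([[1, 2], [1, 2]])]

results = mean_iou.compute(
    predictions=predictions,
    references=references,
    num_labels=3,
    ignore_index=0,  # assumption: unlabeled pixels are excluded from scoring
)
print(results["mean_iou"])           # mean of the per-class IoUs
print(results["per_category_iou"])   # nan for the ignored unlabeled class
```

Mean Accuracy and Overall Accuracy come out of the same compute call, under the mean_accuracy and overall_accuracy keys.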
Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3