sawthiha committed on
Commit b37398e
1 Parent(s): ef28794

End of training

Files changed (4)
  1. README.md +59 -20
  2. config.json +1 -1
  3. model.safetensors +1 -1
  4. training_args.bin +1 -1
README.md CHANGED
@@ -8,15 +8,6 @@ tags:
 model-index:
 - name: segformer-b0-finetuned-deprem-satellite
   results: []
-widget:
-- src: >-
-    https://datasets-server.huggingface.co/assets/deprem-ml/deprem_satellite_semantic_whu_dataset/--/default/train/3/image/image.jpg
-  example_title: Example 1
-- src: >-
-    https://datasets-server.huggingface.co/assets/deprem-ml/deprem_satellite_semantic_whu_dataset/--/default/train/9/image/image.jpg
-  example_title: Example 2
-datasets:
-- deprem-ml/deprem_satellite_semantic_whu_dataset
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -26,12 +17,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) on the deprem-ml/deprem_satellite_semantic_whu_dataset dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.0849
-- eval_runtime: 64.056
-- eval_samples_per_second: 16.173
-- eval_steps_per_second: 4.043
-- epoch: 4.18
-- step: 3960
+- Loss: 0.0685
 
 ## Model description
 
@@ -50,17 +36,70 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 5
-- eval_batch_size: 4
+- learning_rate: 7e-05
+- train_batch_size: 10
+- eval_batch_size: 5
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 50
+- num_epochs: 2
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.0956 | 0.04 | 20 | 0.0717 |
+| 0.1669 | 0.08 | 40 | 0.0708 |
+| 0.073 | 0.13 | 60 | 0.0722 |
+| 0.1258 | 0.17 | 80 | 0.0715 |
+| 0.1167 | 0.21 | 100 | 0.0719 |
+| 0.1157 | 0.25 | 120 | 0.0709 |
+| 0.1373 | 0.3 | 140 | 0.0709 |
+| 0.0749 | 0.34 | 160 | 0.0707 |
+| 0.1033 | 0.38 | 180 | 0.0701 |
+| 0.1277 | 0.42 | 200 | 0.0702 |
+| 0.0979 | 0.46 | 220 | 0.0703 |
+| 0.0959 | 0.51 | 240 | 0.0698 |
+| 0.1111 | 0.55 | 260 | 0.0700 |
+| 0.1389 | 0.59 | 280 | 0.0695 |
+| 0.1247 | 0.63 | 300 | 0.0697 |
+| 0.1385 | 0.68 | 320 | 0.0694 |
+| 0.083 | 0.72 | 340 | 0.0694 |
+| 0.1398 | 0.76 | 360 | 0.0694 |
+| 0.1268 | 0.8 | 380 | 0.0694 |
+| 0.1256 | 0.84 | 400 | 0.0692 |
+| 0.0801 | 0.89 | 420 | 0.0693 |
+| 0.1508 | 0.93 | 440 | 0.0691 |
+| 0.1229 | 0.97 | 460 | 0.0692 |
+| 0.0825 | 1.01 | 480 | 0.0693 |
+| 0.1465 | 1.05 | 500 | 0.0692 |
+| 0.1086 | 1.1 | 520 | 0.0693 |
+| 0.1679 | 1.14 | 540 | 0.0692 |
+| 0.138 | 1.18 | 560 | 0.0693 |
+| 0.1356 | 1.22 | 580 | 0.0689 |
+| 0.0822 | 1.27 | 600 | 0.0690 |
+| 0.1235 | 1.31 | 620 | 0.0689 |
+| 0.0983 | 1.35 | 640 | 0.0688 |
+| 0.1063 | 1.39 | 660 | 0.0689 |
+| 0.111 | 1.43 | 680 | 0.0689 |
+| 0.149 | 1.48 | 700 | 0.0692 |
+| 0.0952 | 1.52 | 720 | 0.0688 |
+| 0.1263 | 1.56 | 740 | 0.0687 |
+| 0.1124 | 1.6 | 760 | 0.0686 |
+| 0.1366 | 1.65 | 780 | 0.0688 |
+| 0.1222 | 1.69 | 800 | 0.0688 |
+| 0.1499 | 1.73 | 820 | 0.0686 |
+| 0.1285 | 1.77 | 840 | 0.0686 |
+| 0.1176 | 1.81 | 860 | 0.0687 |
+| 0.1234 | 1.86 | 880 | 0.0685 |
+| 0.0878 | 1.9 | 900 | 0.0685 |
+| 0.1267 | 1.94 | 920 | 0.0685 |
+| 0.1274 | 1.98 | 940 | 0.0685 |
+
 
 ### Framework versions
 
 - Transformers 4.36.2
 - Pytorch 2.1.2
 - Datasets 2.16.1
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
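The updated card stops at the framework versions and carries no usage snippet. For orientation, here is a minimal inference sketch against this checkpoint; the repository id `sawthiha/segformer-b0-finetuned-deprem-satellite` and the input file name are assumptions inferred from the commit, not stated in the diff, and the image processor is loaded from the base checkpoint since the fine-tuned repo may not ship its own preprocessor config.

```python
# Minimal semantic-segmentation inference sketch (assumed repo id and input file).
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "sawthiha/segformer-b0-finetuned-deprem-satellite"  # assumption, not in the diff
base_id = "nvidia/segformer-b0-finetuned-ade-512-512"          # base model named in the card

processor = AutoImageProcessor.from_pretrained(base_id)  # preprocessing inherited from the base model
model = SegformerForSemanticSegmentation.from_pretrained(model_id)
model.eval()

image = Image.open("satellite.png").convert("RGB")  # hypothetical local satellite tile
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 resolution, so upsample before the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (H, W) label map
print(segmentation.shape, segmentation.unique())
```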
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "nvidia/segformer-b0-finetuned-ade-512-512",
+  "_name_or_path": "segformer-b0-finetuned-deprem-satellite/checkpoint-3920",
   "architectures": [
     "SegformerForSemanticSegmentation"
   ],
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:bfe3c78af33b55e3899e0632e124223ab0602f9fa7c9298bcacfcd653fa38275
+oid sha256:4111c72a1cf2b7a62cd778808e094ea8c71f34f7c204ee5046d25755fbde3b1b
 size 14884776
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:014646daa30947a2a5e78b514576ccb3f04faed51db92120b962d4c9ba2c81aa
+oid sha256:5eb25bf7f25ff7446bb94eeff02b4cee799b103daee45f50a3a27494e34745d3
 size 4728
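The training_args.bin entry above is only a Git LFS pointer; the underlying file is the pickled `TrainingArguments` object the `Trainer` saved at the end of the run. A rough sketch for cross-checking the hyperparameters listed in the card, assuming the repo has been pulled with Git LFS and `transformers` is installed so the pickle can resolve its classes:

```python
# Inspect the Trainer's saved arguments (sketch; requires the real LFS file, not the pointer).
import torch

# training_args.bin is a pickled transformers.TrainingArguments object, so
# transformers must be importable for unpickling to succeed.
args = torch.load("training_args.bin", weights_only=False)

# Spot-check against the README's hyperparameter list (7e-05, 10, 5, 2 epochs).
print(args.learning_rate)
print(args.per_device_train_batch_size, args.per_device_eval_batch_size)
print(args.num_train_epochs, args.lr_scheduler_type, args.seed)
```

Note the card's `train_batch_size`/`eval_batch_size` are totals, so they equal the per-device values printed here only for a single-device run.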