Akshay0706 committed
Commit 77f81a3 · 1 Parent(s): 75eb070

End of training
README.md ADDED
@@ -0,0 +1,129 @@
---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
model-index:
- name: Rice-Plant-50-Epochs-Model
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9688473520249221
    - name: F1
      type: f1
      value: 0.9686087085518211
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Rice-Plant-50-Epochs-Model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1649
- Accuracy: 0.9688
- F1: 0.9686
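
For quick use, a minimal inference sketch (not part of the auto-generated card). The repo id `Akshay0706/Rice-Plant-50-Epochs-Model` is an assumption inferred from the model name and committer, and the example file name is a placeholder:

```python
from transformers import pipeline

# Repo id is assumed from the model name above; adjust to the actual namespace.
classifier = pipeline("image-classification", model="Akshay0706/Rice-Plant-50-Epochs-Model")

# The pipeline accepts a local path or URL; preprocessing (resize to 224x224,
# normalize with mean=std=0.5) is applied automatically from preprocessor_config.json.
for pred in classifier("rice_leaf.jpg"):
    print(pred["label"], round(pred["score"], 4))  # labels are the generic ids 0-5
```
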
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
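
These settings map onto the standard `transformers.TrainingArguments` API roughly as sketched below. This is a reconstruction, not the author's training script; `output_dir` and the per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch rows in the results table):

```python
from transformers import TrainingArguments

# Reconstruction of the reported hyperparameters (transformers 4.35 API).
# The Adam betas/epsilon above match the defaults, so no explicit flags needed.
args = TrainingArguments(
    output_dir="Rice-Plant-50-Epochs-Model",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="epoch",  # inferred from the per-epoch results below
)
```
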
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0399 | 1.0 | 115 | 0.6185 | 0.8910 | 0.8933 |
| 0.3392 | 2.0 | 230 | 0.2849 | 0.9502 | 0.9497 |
| 0.1633 | 3.0 | 345 | 0.2230 | 0.9439 | 0.9440 |
| 0.104 | 4.0 | 460 | 0.2022 | 0.9502 | 0.9495 |
| 0.0828 | 5.0 | 575 | 0.2081 | 0.9408 | 0.9406 |
| 0.0603 | 6.0 | 690 | 0.2301 | 0.9408 | 0.9403 |
| 0.0513 | 7.0 | 805 | 0.1704 | 0.9595 | 0.9593 |
| 0.042 | 8.0 | 920 | 0.1587 | 0.9626 | 0.9626 |
| 0.0356 | 9.0 | 1035 | 0.1606 | 0.9626 | 0.9625 |
| 0.0299 | 10.0 | 1150 | 0.1608 | 0.9657 | 0.9656 |
| 0.0262 | 11.0 | 1265 | 0.1553 | 0.9626 | 0.9625 |
| 0.0232 | 12.0 | 1380 | 0.1582 | 0.9657 | 0.9656 |
| 0.0207 | 13.0 | 1495 | 0.1588 | 0.9657 | 0.9656 |
| 0.0186 | 14.0 | 1610 | 0.1618 | 0.9657 | 0.9656 |
| 0.0168 | 15.0 | 1725 | 0.1618 | 0.9657 | 0.9656 |
| 0.0152 | 16.0 | 1840 | 0.1639 | 0.9657 | 0.9656 |
| 0.0139 | 17.0 | 1955 | 0.1649 | 0.9688 | 0.9686 |
| 0.0127 | 18.0 | 2070 | 0.1676 | 0.9657 | 0.9656 |
| 0.0117 | 19.0 | 2185 | 0.1688 | 0.9688 | 0.9686 |
| 0.0108 | 20.0 | 2300 | 0.1710 | 0.9626 | 0.9622 |
| 0.01 | 21.0 | 2415 | 0.1723 | 0.9657 | 0.9654 |
| 0.0093 | 22.0 | 2530 | 0.1739 | 0.9657 | 0.9654 |
| 0.0087 | 23.0 | 2645 | 0.1758 | 0.9626 | 0.9622 |
| 0.0081 | 24.0 | 2760 | 0.1776 | 0.9626 | 0.9622 |
| 0.0076 | 25.0 | 2875 | 0.1777 | 0.9657 | 0.9654 |
| 0.0071 | 26.0 | 2990 | 0.1792 | 0.9657 | 0.9654 |
| 0.0067 | 27.0 | 3105 | 0.1808 | 0.9657 | 0.9654 |
| 0.0063 | 28.0 | 3220 | 0.1822 | 0.9657 | 0.9654 |
| 0.006 | 29.0 | 3335 | 0.1834 | 0.9657 | 0.9654 |
| 0.0057 | 30.0 | 3450 | 0.1840 | 0.9657 | 0.9654 |
| 0.0054 | 31.0 | 3565 | 0.1855 | 0.9657 | 0.9654 |
| 0.0051 | 32.0 | 3680 | 0.1868 | 0.9657 | 0.9654 |
| 0.0049 | 33.0 | 3795 | 0.1877 | 0.9657 | 0.9654 |
| 0.0047 | 34.0 | 3910 | 0.1892 | 0.9657 | 0.9654 |
| 0.0045 | 35.0 | 4025 | 0.1900 | 0.9657 | 0.9654 |
| 0.0043 | 36.0 | 4140 | 0.1914 | 0.9657 | 0.9654 |
| 0.0042 | 37.0 | 4255 | 0.1919 | 0.9657 | 0.9654 |
| 0.004 | 38.0 | 4370 | 0.1929 | 0.9657 | 0.9654 |
| 0.0039 | 39.0 | 4485 | 0.1938 | 0.9657 | 0.9654 |
| 0.0037 | 40.0 | 4600 | 0.1953 | 0.9657 | 0.9654 |
| 0.0036 | 41.0 | 4715 | 0.1956 | 0.9657 | 0.9654 |
| 0.0035 | 42.0 | 4830 | 0.1965 | 0.9657 | 0.9654 |
| 0.0035 | 43.0 | 4945 | 0.1974 | 0.9657 | 0.9654 |
| 0.0034 | 44.0 | 5060 | 0.1981 | 0.9657 | 0.9654 |
| 0.0033 | 45.0 | 5175 | 0.1984 | 0.9657 | 0.9654 |
| 0.0032 | 46.0 | 5290 | 0.1986 | 0.9657 | 0.9654 |
| 0.0032 | 47.0 | 5405 | 0.1989 | 0.9657 | 0.9654 |
| 0.0032 | 48.0 | 5520 | 0.1993 | 0.9657 | 0.9654 |
| 0.0031 | 49.0 | 5635 | 0.1993 | 0.9657 | 0.9654 |
| 0.0031 | 50.0 | 5750 | 0.1993 | 0.9657 | 0.9654 |


### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
all_results.json ADDED
@@ -0,0 +1,9 @@
{
  "epoch": 50.0,
  "total_flos": 7.083021773205504e+18,
  "train_loss": 0.04543054695233055,
  "train_runtime": 7193.0527,
  "train_samples": 1828,
  "train_samples_per_second": 12.707,
  "train_steps_per_second": 0.799
}
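
(As a sanity check, these figures are self-consistent: 1,828 samples × 50 epochs ÷ 7,193.05 s ≈ 12.707 samples/s, and 5,750 total steps ÷ 7,193.05 s ≈ 0.799 steps/s, matching the reported throughput.)
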
config.json ADDED
@@ -0,0 +1,40 @@
{
  "_name_or_path": "google/vit-base-patch16-224-in21k",
  "architectures": [
    "ViTForImageClassification"
  ],
  "attention_probs_dropout_prob": 0.0,
  "encoder_stride": 16,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.0,
  "hidden_size": 768,
  "id2label": {
    "0": 0,
    "1": 1,
    "2": 2,
    "3": 3,
    "4": 4,
    "5": 5
  },
  "image_size": 224,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "0": 0,
    "1": 1,
    "2": 2,
    "3": 3,
    "4": 4,
    "5": 5
  },
  "layer_norm_eps": 1e-12,
  "model_type": "vit",
  "num_attention_heads": 12,
  "num_channels": 3,
  "num_hidden_layers": 12,
  "patch_size": 16,
  "problem_type": "single_label_classification",
  "qkv_bias": true,
  "torch_dtype": "float32",
  "transformers_version": "4.35.0"
}
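
One caveat worth noting: `id2label` and `label2id` were exported with generic integer ids, so the checkpoint does not record the six class names; they would have to be recovered from the training imagefolder's class ordering. A quick inspection sketch (repo id assumed, as in the README sketch above):

```python
from transformers import AutoConfig

# Repo id is an assumption; see the usage sketch in the README above.
config = AutoConfig.from_pretrained("Akshay0706/Rice-Plant-50-Epochs-Model")
print(config.num_labels)  # 6
print(config.id2label)    # generic integer ids only -- map to class names yourself
```
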
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:191a6be8d14a698a0804d0767e9dc832af003d14274dca3b0ec4aa32d8f7d3dc
size 343236280
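
(For reference, 343,236,280 bytes is consistent with ViT-Base weights in float32: roughly 86M parameters × 4 bytes.)
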
preprocessor_config.json ADDED
@@ -0,0 +1,22 @@
{
  "do_normalize": true,
  "do_rescale": true,
  "do_resize": true,
  "image_mean": [
    0.5,
    0.5,
    0.5
  ],
  "image_processor_type": "ViTImageProcessor",
  "image_std": [
    0.5,
    0.5,
    0.5
  ],
  "resample": 2,
  "rescale_factor": 0.00392156862745098,
  "size": {
    "height": 224,
    "width": 224
  }
}
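
In code, this preprocessing corresponds to loading the processor and applying it by hand; a sketch, with the repo id again assumed:

```python
from PIL import Image
from transformers import ViTImageProcessor

# Repo id is an assumption, as in the earlier sketches.
processor = ViTImageProcessor.from_pretrained("Akshay0706/Rice-Plant-50-Epochs-Model")

# Applies the config above: resize to 224x224 (resample=2 is PIL bilinear),
# rescale by 1/255, then normalize each channel with mean=0.5, std=0.5.
image = Image.open("rice_leaf.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```
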
train_results.json ADDED
@@ -0,0 +1,9 @@
{
  "epoch": 50.0,
  "total_flos": 7.083021773205504e+18,
  "train_loss": 0.04543054695233055,
  "train_runtime": 7193.0527,
  "train_samples": 1828,
  "train_samples_per_second": 12.707,
  "train_steps_per_second": 0.799
}
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dbb048af45d2d49c04327edd7c54665aeccb12a482f28c0f560409a5247556a0
size 4536