ongkn committed on
Commit 79d935d
1 Parent(s): 438a1ff

update model card README.md

Files changed (1)
  1. README.md +6 -40
README.md CHANGED
@@ -5,24 +5,9 @@ tags:
 - generated_from_trainer
 datasets:
 - imagefolder
-metrics:
-- accuracy
 model-index:
 - name: attraction-classifier
-  results:
-  - task:
-      name: Image Classification
-      type: image-classification
-    dataset:
-      name: imagefolder
-      type: imagefolder
-      config: default
-      split: train
-      args: default
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.8389955686853766
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,9 +16,6 @@ should probably proofread and complete it, then remove this comment. -->
 # attraction-classifier
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.3983
-- Accuracy: 0.8390
 
 ## Model description
 
@@ -55,7 +37,7 @@ The following hyperparameters were used during training:
 - learning_rate: 5e-05
 - train_batch_size: 16
 - eval_batch_size: 16
-- seed: 42
+- seed: 69
 - gradient_accumulation_steps: 4
 - total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -63,25 +45,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 10
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.5745        | 0.99  | 42   | 0.5208          | 0.7829   |
-| 0.4617        | 2.0   | 85   | 0.4346          | 0.8065   |
-| 0.4245        | 2.99  | 127  | 0.4151          | 0.8346   |
-| 0.3512        | 4.0   | 170  | 0.3854          | 0.8508   |
-| 0.3146        | 4.99  | 212  | 0.4062          | 0.8360   |
-| 0.3235        | 6.0   | 255  | 0.3864          | 0.8390   |
-| 0.2699        | 6.99  | 297  | 0.4094          | 0.8508   |
-| 0.3049        | 8.0   | 340  | 0.3735          | 0.8567   |
-| 0.2459        | 8.99  | 382  | 0.4037          | 0.8360   |
-| 0.2277        | 9.88  | 420  | 0.3983          | 0.8390   |
-
-
 ### Framework versions
 
-- Transformers 4.35.2
-- Pytorch 2.1.1+cu121
-- Datasets 2.15.0
-- Tokenizers 0.15.0
+- Transformers 4.31.0
+- Pytorch 2.0.1+cu117
+- Datasets 2.12.0
+- Tokenizers 0.13.3
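
For reviewers of this card, a minimal inference sketch follows. It assumes the checkpoint is published as `ongkn/attraction-classifier` (a guess from the committer and model name; the repo id is not stated in the diff) and uses a placeholder image path.

```python
from transformers import pipeline

# Hypothetical repo id inferred from committer + model name; adjust to the real one.
clf = pipeline("image-classification", model="ongkn/attraction-classifier")

# Any local image path or PIL.Image works; "example.jpg" is a placeholder.
print(clf("example.jpg"))  # -> [{"label": ..., "score": ...}, ...]
```

Since the base model is a 224x224 ViT, the pipeline's image processor resizes inputs automatically.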
 
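The hyperparameters listed in the card map one-to-one onto `transformers.TrainingArguments`. The sketch below reproduces them under the assumption of a single-GPU run (16 per-device batch × 4 accumulation steps gives the listed total train batch size of 64); `output_dir` is chosen purely for illustration.

```python
from transformers import TrainingArguments

# Sketch of the card's training hyperparameters; assumes one GPU so that
# 16 (per-device batch) * 4 (gradient accumulation) = 64 total train batch size.
training_args = TrainingArguments(
    output_dir="attraction-classifier",  # illustrative only
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=69,                             # the value in the updated card
    gradient_accumulation_steps=4,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```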