WKLI22 committed on
Commit c7b586d · verified · 1 Parent(s): 5fabbb8

End of training

README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [WKLI22/detr-resnet-50_finetuned_cppe5](https://huggingface.co/WKLI22/detr-resnet-50_finetuned_cppe5) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5523
+- Loss: 0.5201
 
 ## Model description
 
@@ -35,11 +35,11 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 17
+- eval_batch_size: 17
 - seed: 42
 - gradient_accumulation_steps: 6
-- total_train_batch_size: 96
+- total_train_batch_size: 102
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 1
@@ -49,20 +49,20 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.504 | 0.07 | 2 | 0.5922 |
-| 0.557 | 0.13 | 4 | 0.5807 |
-| 0.5073 | 0.2 | 6 | 0.5912 |
-| 0.5733 | 0.27 | 8 | 0.5871 |
-| 0.5552 | 0.34 | 10 | 0.5620 |
-| 0.5073 | 0.4 | 12 | 0.5599 |
-| 0.6053 | 0.47 | 14 | 0.5825 |
-| 0.5413 | 0.54 | 16 | 0.5558 |
-| 0.5741 | 0.6 | 18 | 0.5606 |
-| 0.5522 | 0.67 | 20 | 0.5415 |
-| 0.5498 | 0.74 | 22 | 0.5369 |
-| 0.485 | 0.8 | 24 | 0.5485 |
-| 0.5658 | 0.87 | 26 | 0.5448 |
-| 0.5519 | 0.94 | 28 | 0.5523 |
+| 0.4924 | 0.07 | 2 | 0.5583 |
+| 0.5373 | 0.14 | 4 | 0.5413 |
+| 0.5166 | 0.21 | 6 | 0.5306 |
+| 0.5765 | 0.28 | 8 | 0.5416 |
+| 0.5405 | 0.36 | 10 | 0.5450 |
+| 0.5289 | 0.43 | 12 | 0.5315 |
+| 0.5312 | 0.5 | 14 | 0.5215 |
+| 0.537 | 0.57 | 16 | 0.5313 |
+| 0.5417 | 0.64 | 18 | 0.5255 |
+| 0.5214 | 0.71 | 20 | 0.5251 |
+| 0.5143 | 0.78 | 22 | 0.5309 |
+| 0.5326 | 0.85 | 24 | 0.5269 |
+| 0.5289 | 0.92 | 26 | 0.5310 |
+| 0.5524 | 0.99 | 28 | 0.5201 |
 
 
 ### Framework versions
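
For reference, a minimal sketch of a `transformers.TrainingArguments` setup that would match the hyperparameters in the updated card (effective batch size 17 × 6 = 102 on a single device). The `output_dir` path and variable name are illustrative assumptions, not values taken from this repository; only the numeric settings come from the card.

```python
from transformers import TrainingArguments

# Sketch of the training configuration described in the card.
# "detr_finetune_out" is a hypothetical output directory.
training_args = TrainingArguments(
    output_dir="detr_finetune_out",
    learning_rate=1e-5,
    per_device_train_batch_size=17,
    per_device_eval_batch_size=17,
    gradient_accumulation_steps=6,   # 17 * 6 = 102 effective train batch size
    num_train_epochs=1,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```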
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8e92a618a8f592571c815a881d3d49b1c4144e39cdbfb4417b87051c4dcc8ca5
+oid sha256:06004c1276d0661651d88e4d92e3fb788b5b8f66c6ff3a5c83b9b1a3ab6fa72c
 size 166494824
runs/Apr14_20-12-21_c24f0b7a0a52/events.out.tfevents.1713125555.c24f0b7a0a52.4475.2 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6ebae1fda755f265a9027a880f5fb5410c4152b7e9aafbbb8f665c6b2d2c2368
-size 10924
+oid sha256:a7a74a9b2c879663eeb7a41cbfd9b44a60de8228954b1d6e8adb713e164f2c19
+size 12218