End of training
README.md
CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [WKLI22/detr-resnet-50_finetuned_cppe5](https://huggingface.co/WKLI22/detr-resnet-50_finetuned_cppe5) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
+- Loss: 0.5523
 
 ## Model description
 
@@ -35,88 +35,39 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size:
-- eval_batch_size:
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
-- gradient_accumulation_steps:
-- total_train_batch_size:
+- gradient_accumulation_steps: 6
+- total_train_batch_size: 96
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 1
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.404 | 2.38 | 150 | 0.3428 |
-| 0.3519 | 2.54 | 160 | 0.3375 |
-| 0.3647 | 2.7 | 170 | 0.3352 |
-| 0.3669 | 2.86 | 180 | 0.3509 |
-| 0.3695 | 3.02 | 190 | 0.3452 |
-| 0.341 | 3.17 | 200 | 0.3614 |
-| 0.3798 | 3.33 | 210 | 0.3589 |
-| 0.3421 | 3.49 | 220 | 0.3646 |
-| 0.3541 | 3.65 | 230 | 0.3562 |
-| 0.4168 | 3.81 | 240 | 0.3584 |
-| 0.3423 | 3.97 | 250 | 0.3508 |
-| 0.3548 | 4.13 | 260 | 0.3339 |
-| 0.3854 | 4.29 | 270 | 0.3424 |
-| 0.3435 | 4.44 | 280 | 0.3353 |
-| 0.4037 | 4.6 | 290 | 0.3408 |
-| 0.3741 | 4.76 | 300 | 0.3317 |
-| 0.3454 | 4.92 | 310 | 0.3112 |
-| 0.3717 | 5.08 | 320 | 0.3211 |
-| 0.3695 | 5.24 | 330 | 0.3424 |
-| 0.3379 | 5.4 | 340 | 0.3321 |
-| 0.3516 | 5.56 | 350 | 0.3441 |
-| 0.3672 | 5.71 | 360 | 0.3307 |
-| 0.3842 | 5.87 | 370 | 0.3414 |
-| 0.3385 | 6.03 | 380 | 0.3386 |
-| 0.3613 | 6.19 | 390 | 0.3248 |
-| 0.3542 | 6.35 | 400 | 0.3217 |
-| 0.3509 | 6.51 | 410 | 0.3180 |
-| 0.3532 | 6.67 | 420 | 0.3217 |
-| 0.3426 | 6.83 | 430 | 0.3393 |
-| 0.3476 | 6.98 | 440 | 0.3400 |
-| 0.3384 | 7.14 | 450 | 0.3334 |
-| 0.3568 | 7.3 | 460 | 0.3300 |
-| 0.3253 | 7.46 | 470 | 0.3414 |
-| 0.3453 | 7.62 | 480 | 0.3367 |
-| 0.3507 | 7.78 | 490 | 0.3340 |
-| 0.3198 | 7.94 | 500 | 0.3213 |
-| 0.3121 | 8.1 | 510 | 0.3448 |
-| 0.3492 | 8.25 | 520 | 0.3426 |
-| 0.3382 | 8.41 | 530 | 0.3392 |
-| 0.3498 | 8.57 | 540 | 0.3433 |
-| 0.3504 | 8.73 | 550 | 0.3520 |
-| 0.3255 | 8.89 | 560 | 0.3370 |
-| 0.3294 | 9.05 | 570 | 0.3390 |
-| 0.3325 | 9.21 | 580 | 0.3392 |
-| 0.3304 | 9.37 | 590 | 0.3358 |
-| 0.3393 | 9.52 | 600 | 0.3415 |
-| 0.3198 | 9.68 | 610 | 0.3388 |
-| 0.3576 | 9.84 | 620 | 0.3352 |
-| 0.3801 | 10.0 | 630 | 0.3434 |
+| 0.504 | 0.07 | 2 | 0.5922 |
+| 0.557 | 0.13 | 4 | 0.5807 |
+| 0.5073 | 0.2 | 6 | 0.5912 |
+| 0.5733 | 0.27 | 8 | 0.5871 |
+| 0.5552 | 0.34 | 10 | 0.5620 |
+| 0.5073 | 0.4 | 12 | 0.5599 |
+| 0.6053 | 0.47 | 14 | 0.5825 |
+| 0.5413 | 0.54 | 16 | 0.5558 |
+| 0.5741 | 0.6 | 18 | 0.5606 |
+| 0.5522 | 0.67 | 20 | 0.5415 |
+| 0.5498 | 0.74 | 22 | 0.5369 |
+| 0.485 | 0.8 | 24 | 0.5485 |
+| 0.5658 | 0.87 | 26 | 0.5448 |
+| 0.5519 | 0.94 | 28 | 0.5523 |
 
 
 ### Framework versions
 
-- Transformers 4.
-- Pytorch 2.2.
+- Transformers 4.38.2
+- Pytorch 2.2.1+cu121
 - Datasets 2.18.0
 - Tokenizers 0.15.2
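For readers who want to reproduce the run, the hyperparameters above map onto the Hugging Face `Trainer` roughly as in the sketch below. This is not the training script from this commit: `output_dir` is a placeholder, and the scheduler and Adam settings are simply the `TrainingArguments` defaults written out for clarity. Note that the card's total_train_batch_size of 96 is the per-device batch size times the accumulation steps (16 × 6 = 96).

```python
# Minimal sketch of TrainingArguments matching the card's hyperparameters.
# Assumes the standard Trainer API in Transformers 4.38.2; output_dir and
# the model/dataset wiring are hypothetical placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="detr-resnet-50_finetuned_cppe5",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=6,  # effective batch size: 16 * 6 = 96
    num_train_epochs=1,
    lr_scheduler_type="linear",     # Trainer default, shown explicitly
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default
# optimizer configuration, so it needs no explicit arguments here.
```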
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:0b0fd4b0fa579e1b595533966ee508882446dd40fb354ce392d4345bacbe4170
 size 166494824
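The weights themselves live in Git LFS; this commit only rewrites the pointer file, whose `oid` is the SHA-256 of the blob and whose `size` is its byte count. A downloaded copy can be checked against the pointer with a standard-library sketch like the following (it assumes `model.safetensors` sits in the current directory):

```python
# Verify a downloaded file against a Git LFS pointer's oid and size.
import hashlib
import os

path = "model.safetensors"  # assumes the file from this commit, local copy
expected_oid = "0b0fd4b0fa579e1b595533966ee508882446dd40fb354ce392d4345bacbe4170"
expected_size = 166494824

h = hashlib.sha256()
with open(path, "rb") as f:
    # Hash in 1 MiB chunks to avoid loading the whole file into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("LFS pointer verified")
```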
runs/Apr14_19-59-27_c24f0b7a0a52/events.out.tfevents.1713124771.c24f0b7a0a52.4475.1
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:33496b74387c9c7de67bd7e8551fa1076f447085e2beda4710c09286353fe38d
-size
+size 12218
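Since the card's usage sections are still autogenerated boilerplate, here is a hedged inference sketch using the stock DETR classes from Transformers 4.38.2. The repo id shown is the base checkpoint's; the id of the repository this commit belongs to is not visible in the diff, so substitute it, and `example.jpg` is a placeholder image path.

```python
# Sketch of loading the fine-tuned checkpoint for object detection.
# The repo id below is the *base* model from the card; replace it with
# the id of the repository this commit was pushed to.
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForObjectDetection

repo = "WKLI22/detr-resnet-50_finetuned_cppe5"  # substitute this card's repo id
processor = AutoImageProcessor.from_pretrained(repo)
model = DetrForObjectDetection.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes to detections above a 0.5 confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```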