MarkGG committed on
Commit 4095a7d · 1 Parent(s): c29b7ac

update model card README.md

Files changed (1)
  1. README.md +53 -43
README.md CHANGED
@@ -14,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 5.0391
+- Loss: 5.0319
 
 ## Model description
 
@@ -42,58 +42,68 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 1000
-- num_epochs: 40
+- num_epochs: 50
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:-----:|:----:|:---------------:|
-| No log | 0.96 | 16 | 10.2947 |
-| No log | 1.96 | 32 | 9.5359 |
-| No log | 2.96 | 48 | 9.0840 |
-| No log | 3.96 | 64 | 8.7926 |
-| No log | 4.96 | 80 | 8.4575 |
-| No log | 5.96 | 96 | 8.2113 |
-| No log | 6.96 | 112 | 7.9965 |
-| No log | 7.96 | 128 | 7.7896 |
-| No log | 8.96 | 144 | 7.5764 |
-| No log | 9.96 | 160 | 7.3858 |
-| No log | 10.96 | 176 | 7.2132 |
-| No log | 11.96 | 192 | 7.0529 |
-| No log | 12.96 | 208 | 6.8885 |
-| No log | 13.96 | 224 | 6.7307 |
-| No log | 14.96 | 240 | 6.5821 |
-| No log | 15.96 | 256 | 6.4306 |
-| No log | 16.96 | 272 | 6.2825 |
-| No log | 17.96 | 288 | 6.1330 |
-| No log | 18.96 | 304 | 5.9928 |
-| No log | 19.96 | 320 | 5.8703 |
-| No log | 20.96 | 336 | 5.7581 |
-| No log | 21.96 | 352 | 5.6566 |
-| No log | 22.96 | 368 | 5.5700 |
-| No log | 23.96 | 384 | 5.4981 |
-| No log | 24.96 | 400 | 5.4298 |
-| No log | 25.96 | 416 | 5.3661 |
-| No log | 26.96 | 432 | 5.3182 |
-| No log | 27.96 | 448 | 5.2724 |
-| No log | 28.96 | 464 | 5.2366 |
-| No log | 29.96 | 480 | 5.2059 |
-| No log | 30.96 | 496 | 5.1780 |
-| No log | 31.96 | 512 | 5.1552 |
-| No log | 32.96 | 528 | 5.1322 |
-| No log | 33.96 | 544 | 5.1150 |
-| No log | 34.96 | 560 | 5.0921 |
-| No log | 35.96 | 576 | 5.0888 |
-| No log | 36.96 | 592 | 5.0685 |
-| No log | 37.96 | 608 | 5.0527 |
-| No log | 38.96 | 624 | 5.0460 |
-| No log | 39.96 | 640 | 5.0391 |
+| No log | 0.96 | 16 | 10.3553 |
+| No log | 1.96 | 32 | 9.5625 |
+| No log | 2.96 | 48 | 9.0898 |
+| No log | 3.96 | 64 | 8.7852 |
+| No log | 4.96 | 80 | 8.4694 |
+| No log | 5.96 | 96 | 8.2122 |
+| No log | 6.96 | 112 | 8.0040 |
+| No log | 7.96 | 128 | 7.8029 |
+| No log | 8.96 | 144 | 7.5950 |
+| No log | 9.96 | 160 | 7.4081 |
+| No log | 10.96 | 176 | 7.2391 |
+| No log | 11.96 | 192 | 7.0784 |
+| No log | 12.96 | 208 | 6.9139 |
+| No log | 13.96 | 224 | 6.7530 |
+| No log | 14.96 | 240 | 6.5983 |
+| No log | 15.96 | 256 | 6.4403 |
+| No log | 16.96 | 272 | 6.3025 |
+| No log | 17.96 | 288 | 6.1562 |
+| No log | 18.96 | 304 | 6.0147 |
+| No log | 19.96 | 320 | 5.8919 |
+| No log | 20.96 | 336 | 5.7709 |
+| No log | 21.96 | 352 | 5.6666 |
+| No log | 22.96 | 368 | 5.5818 |
+| No log | 23.96 | 384 | 5.5051 |
+| No log | 24.96 | 400 | 5.4356 |
+| No log | 25.96 | 416 | 5.3788 |
+| No log | 26.96 | 432 | 5.3230 |
+| No log | 27.96 | 448 | 5.2823 |
+| No log | 28.96 | 464 | 5.2513 |
+| No log | 29.96 | 480 | 5.2218 |
+| No log | 30.96 | 496 | 5.1910 |
+| No log | 31.96 | 512 | 5.1609 |
+| No log | 32.96 | 528 | 5.1500 |
+| No log | 33.96 | 544 | 5.1268 |
+| No log | 34.96 | 560 | 5.1012 |
+| No log | 35.96 | 576 | 5.0973 |
+| No log | 36.96 | 592 | 5.0769 |
+| No log | 37.96 | 608 | 5.0653 |
+| No log | 38.96 | 624 | 5.0489 |
+| No log | 39.96 | 640 | 5.0458 |
+| No log | 40.96 | 656 | 5.0379 |
+| No log | 41.96 | 672 | 5.0347 |
+| No log | 42.96 | 688 | 5.0161 |
+| No log | 43.96 | 704 | 5.0226 |
+| No log | 44.96 | 720 | 5.0215 |
+| No log | 45.96 | 736 | 5.0190 |
+| No log | 46.96 | 752 | 5.0087 |
+| No log | 47.96 | 768 | 5.0309 |
+| No log | 48.96 | 784 | 5.0232 |
+| No log | 49.96 | 800 | 5.0319 |
 
 
 ### Framework versions
 
-- Transformers 4.23.1
+- Transformers 4.24.0
 - Pytorch 1.12.1+cu113
 - Datasets 2.6.1
 - Tokenizers 0.13.1
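
The hyperparameter block in this card maps directly onto the `TrainingArguments` class from transformers. Below is a minimal sketch of the updated configuration, assuming the standard `Trainer` API was used; `output_dir`, `learning_rate`, and the batch size are placeholders, since the diff hunk does not show those fields.

```python
from transformers import TrainingArguments

# Sketch of the configuration on the "+" side of this diff.
# Values marked "placeholder" are NOT visible in the hunk shown above.
training_args = TrainingArguments(
    output_dir="gpt2-finetuned",    # placeholder path
    learning_rate=5e-5,             # placeholder: not shown in this hunk
    per_device_train_batch_size=8,  # placeholder: not shown in this hunk
    adam_beta1=0.9,                 # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,             # ...and epsilon=1e-08
    lr_scheduler_type="cosine",     # lr_scheduler_type: cosine
    warmup_steps=1000,              # lr_scheduler_warmup_steps: 1000
    num_train_epochs=50,            # num_epochs: 40 -> 50 in this commit
    fp16=True,                      # mixed_precision_training: Native AMP
    evaluation_strategy="epoch",    # matches the one-eval-per-epoch rows above
)
```

The extra ten epochs are what produce the 41st through 50th table rows and move the reported evaluation loss from 5.0391 to 5.0319.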