terry69 committed on
Commit a22f269 · verified
1 Parent(s): 0f27a43

Model save

Files changed (4)
  1. README.md +61 -0
  2. all_results.json +9 -0
  3. train_results.json +9 -0
  4. trainer_state.json +1512 -0
README.md ADDED
@@ -0,0 +1,61 @@
+ ---
+ base_model: mistralai/Mistral-7B-v0.1
+ library_name: peft
+ license: apache-2.0
+ tags:
+ - trl
+ - sft
+ - generated_from_trainer
+ model-index:
+ - name: negative_sent
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # negative_sent
+
+ This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a reproduction sketch follows this list):
+ - learning_rate: 1e-05
+ - train_batch_size: 16
+ - eval_batch_size: 1
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 4
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
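
The hyperparameters above are the only training details recorded in this commit. As a rough illustration, a run like this could be set up with TRL's `SFTTrainer` plus a PEFT/LoRA config as sketched below. This is a minimal sketch, not the original script: the dataset, the LoRA rank/alpha/target modules, and the use of `SFTConfig` are assumptions, and only the values listed above are taken from this repository.

```python
# Hypothetical reproduction sketch -- not the script that produced this checkpoint.
# Dataset path and LoRA settings are assumptions; hyperparameters mirror the list above.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

base = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Placeholder dataset with a "text" column; the actual dataset is not recorded in the card.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

peft_config = LoraConfig(  # adapter shape is assumed, not recorded in this commit
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

args = SFTConfig(
    output_dir="negative_sent",
    learning_rate=1e-5,
    per_device_train_batch_size=16,  # x 4 GPUs -> total_train_batch_size 64
    per_device_eval_batch_size=1,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    logging_steps=5,
    save_steps=100,
    bf16=True,
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
# Launch with `accelerate launch train.py` (or torchrun) on 4 GPUs to match distributed_type: multi-GPU.
```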
+
+ ### Training results
+
+
+
+ ### Framework versions
+
+ - PEFT 0.13.0
+ - Transformers 4.45.1
+ - Pytorch 2.4.1+cu121
+ - Datasets 3.0.1
+ - Tokenizers 0.20.0
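
Since this repository stores a PEFT (LoRA) adapter rather than full model weights, loading it for inference goes through `peft`. A minimal sketch, assuming the adapter lives at `terry69/negative_sent` (inferred from this commit's author and model name; a local checkpoint directory works the same way):

```python
# Minimal inference sketch for the saved PEFT adapter.
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "terry69/negative_sent"  # assumption: replace with the actual repo id or local path

# Loads mistralai/Mistral-7B-v0.1 as the base model and attaches the LoRA adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    adapter_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Review: The food arrived cold and the staff"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
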
all_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 1.0,
+     "total_flos": 1940427569627136.0,
+     "train_loss": 1.0122476654034465,
+     "train_runtime": 20247.7008,
+     "train_samples": 100000,
+     "train_samples_per_second": 3.298,
+     "train_steps_per_second": 0.052
+ }
train_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 1.0,
+     "total_flos": 1940427569627136.0,
+     "train_loss": 1.0122476654034465,
+     "train_runtime": 20247.7008,
+     "train_samples": 100000,
+     "train_samples_per_second": 3.298,
+     "train_steps_per_second": 0.052
+ }
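
The two results files are identical summaries of the run. Their speed fields are consistent with the 1,044 optimizer steps and effective batch size of 64 recorded in trainer_state.json; a quick consistency check (assuming the Trainer's usual definitions of these fields):

```python
# Consistency check of the reported speed metrics. Note that "train_samples"
# (100000) is the raw dataset size; the throughput below is measured over the
# examples actually stepped through (roughly steps * effective batch size).
runtime_s = 20247.7008
steps = 1044              # global_step from trainer_state.json
effective_batch = 16 * 4  # per-device batch 16 on 4 GPUs

print(round(steps / runtime_s, 3))                    # 0.052 -> matches train_steps_per_second
print(round(steps * effective_batch / runtime_s, 3))  # ~3.3  -> close to train_samples_per_second 3.298
```
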
trainer_state.json ADDED
@@ -0,0 +1,1512 @@
1
+ {
2
+ "best_metric": null,
3
+ "best_model_checkpoint": null,
4
+ "epoch": 1.0,
5
+ "eval_steps": 500,
6
+ "global_step": 1044,
7
+ "is_hyper_param_search": false,
8
+ "is_local_process_zero": true,
9
+ "is_world_process_zero": true,
10
+ "log_history": [
11
+ {
12
+ "epoch": 0.0009578544061302681,
13
+ "grad_norm": 0.6597593171619324,
14
+ "learning_rate": 9.523809523809525e-08,
15
+ "loss": 1.1525,
16
+ "step": 1
17
+ },
18
+ {
19
+ "epoch": 0.004789272030651341,
20
+ "grad_norm": 0.6607550915738474,
21
+ "learning_rate": 4.7619047619047623e-07,
22
+ "loss": 1.1632,
23
+ "step": 5
24
+ },
25
+ {
26
+ "epoch": 0.009578544061302681,
27
+ "grad_norm": 0.6985874969645404,
28
+ "learning_rate": 9.523809523809525e-07,
29
+ "loss": 1.1753,
30
+ "step": 10
31
+ },
32
+ {
33
+ "epoch": 0.014367816091954023,
34
+ "grad_norm": 0.6553768135986224,
35
+ "learning_rate": 1.4285714285714286e-06,
36
+ "loss": 1.1488,
37
+ "step": 15
38
+ },
39
+ {
40
+ "epoch": 0.019157088122605363,
41
+ "grad_norm": 0.6087816631352209,
42
+ "learning_rate": 1.904761904761905e-06,
43
+ "loss": 1.1384,
44
+ "step": 20
45
+ },
46
+ {
47
+ "epoch": 0.023946360153256706,
48
+ "grad_norm": 0.5225466109798246,
49
+ "learning_rate": 2.380952380952381e-06,
50
+ "loss": 1.122,
51
+ "step": 25
52
+ },
53
+ {
54
+ "epoch": 0.028735632183908046,
55
+ "grad_norm": 0.5080052683286845,
56
+ "learning_rate": 2.8571428571428573e-06,
57
+ "loss": 1.1505,
58
+ "step": 30
59
+ },
60
+ {
61
+ "epoch": 0.033524904214559385,
62
+ "grad_norm": 0.4341207873557241,
63
+ "learning_rate": 3.3333333333333333e-06,
64
+ "loss": 1.1339,
65
+ "step": 35
66
+ },
67
+ {
68
+ "epoch": 0.038314176245210725,
69
+ "grad_norm": 0.3985137882747194,
70
+ "learning_rate": 3.80952380952381e-06,
71
+ "loss": 1.1282,
72
+ "step": 40
73
+ },
74
+ {
75
+ "epoch": 0.04310344827586207,
76
+ "grad_norm": 0.36938748261120696,
77
+ "learning_rate": 4.2857142857142855e-06,
78
+ "loss": 1.1141,
79
+ "step": 45
80
+ },
81
+ {
82
+ "epoch": 0.04789272030651341,
83
+ "grad_norm": 0.31186688358132403,
84
+ "learning_rate": 4.761904761904762e-06,
85
+ "loss": 1.0914,
86
+ "step": 50
87
+ },
88
+ {
89
+ "epoch": 0.05268199233716475,
90
+ "grad_norm": 0.3279958963323018,
91
+ "learning_rate": 5.2380952380952384e-06,
92
+ "loss": 1.1115,
93
+ "step": 55
94
+ },
95
+ {
96
+ "epoch": 0.05747126436781609,
97
+ "grad_norm": 0.3107640896000187,
98
+ "learning_rate": 5.7142857142857145e-06,
99
+ "loss": 1.0897,
100
+ "step": 60
101
+ },
102
+ {
103
+ "epoch": 0.06226053639846743,
104
+ "grad_norm": 0.3071723661825106,
105
+ "learning_rate": 6.1904761904761914e-06,
106
+ "loss": 1.1045,
107
+ "step": 65
108
+ },
109
+ {
110
+ "epoch": 0.06704980842911877,
111
+ "grad_norm": 0.2936092764163838,
112
+ "learning_rate": 6.666666666666667e-06,
113
+ "loss": 1.0661,
114
+ "step": 70
115
+ },
116
+ {
117
+ "epoch": 0.07183908045977011,
118
+ "grad_norm": 0.28053978486957076,
119
+ "learning_rate": 7.1428571428571436e-06,
120
+ "loss": 1.0887,
121
+ "step": 75
122
+ },
123
+ {
124
+ "epoch": 0.07662835249042145,
125
+ "grad_norm": 0.2891456438800518,
126
+ "learning_rate": 7.61904761904762e-06,
127
+ "loss": 1.0756,
128
+ "step": 80
129
+ },
130
+ {
131
+ "epoch": 0.08141762452107279,
132
+ "grad_norm": 0.24950135731624706,
133
+ "learning_rate": 8.095238095238097e-06,
134
+ "loss": 1.0478,
135
+ "step": 85
136
+ },
137
+ {
138
+ "epoch": 0.08620689655172414,
139
+ "grad_norm": 0.23670461068088686,
140
+ "learning_rate": 8.571428571428571e-06,
141
+ "loss": 1.0389,
142
+ "step": 90
143
+ },
144
+ {
145
+ "epoch": 0.09099616858237548,
146
+ "grad_norm": 0.2507461508986584,
147
+ "learning_rate": 9.047619047619049e-06,
148
+ "loss": 1.0509,
149
+ "step": 95
150
+ },
151
+ {
152
+ "epoch": 0.09578544061302682,
153
+ "grad_norm": 0.22930855800514297,
154
+ "learning_rate": 9.523809523809525e-06,
155
+ "loss": 1.0377,
156
+ "step": 100
157
+ },
158
+ {
159
+ "epoch": 0.10057471264367816,
160
+ "grad_norm": 0.24445927836597864,
161
+ "learning_rate": 1e-05,
162
+ "loss": 1.0506,
163
+ "step": 105
164
+ },
165
+ {
166
+ "epoch": 0.1053639846743295,
167
+ "grad_norm": 0.22388714842997454,
168
+ "learning_rate": 9.999300418283908e-06,
169
+ "loss": 1.0377,
170
+ "step": 110
171
+ },
172
+ {
173
+ "epoch": 0.11015325670498084,
174
+ "grad_norm": 0.2406057364217601,
175
+ "learning_rate": 9.997201868901463e-06,
176
+ "loss": 1.0374,
177
+ "step": 115
178
+ },
179
+ {
180
+ "epoch": 0.11494252873563218,
181
+ "grad_norm": 0.2275701466092952,
182
+ "learning_rate": 9.993704939095376e-06,
183
+ "loss": 1.0663,
184
+ "step": 120
185
+ },
186
+ {
187
+ "epoch": 0.11973180076628352,
188
+ "grad_norm": 0.22200651495287974,
189
+ "learning_rate": 9.988810607420912e-06,
190
+ "loss": 1.0448,
191
+ "step": 125
192
+ },
193
+ {
194
+ "epoch": 0.12452107279693486,
195
+ "grad_norm": 0.22051254365376602,
196
+ "learning_rate": 9.982520243472044e-06,
197
+ "loss": 1.0099,
198
+ "step": 130
199
+ },
200
+ {
201
+ "epoch": 0.12931034482758622,
202
+ "grad_norm": 0.23279527845715264,
203
+ "learning_rate": 9.974835607498224e-06,
204
+ "loss": 1.0212,
205
+ "step": 135
206
+ },
207
+ {
208
+ "epoch": 0.13409961685823754,
209
+ "grad_norm": 0.22813702069156522,
210
+ "learning_rate": 9.965758849911774e-06,
211
+ "loss": 1.023,
212
+ "step": 140
213
+ },
214
+ {
215
+ "epoch": 0.1388888888888889,
216
+ "grad_norm": 0.23555996785615885,
217
+ "learning_rate": 9.955292510686156e-06,
218
+ "loss": 0.9997,
219
+ "step": 145
220
+ },
221
+ {
222
+ "epoch": 0.14367816091954022,
223
+ "grad_norm": 0.2560040654609538,
224
+ "learning_rate": 9.943439518645193e-06,
225
+ "loss": 1.0172,
226
+ "step": 150
227
+ },
228
+ {
229
+ "epoch": 0.14846743295019157,
230
+ "grad_norm": 0.24096780952464245,
231
+ "learning_rate": 9.930203190643491e-06,
232
+ "loss": 0.9876,
233
+ "step": 155
234
+ },
235
+ {
236
+ "epoch": 0.1532567049808429,
237
+ "grad_norm": 0.24193989881098193,
238
+ "learning_rate": 9.915587230638269e-06,
239
+ "loss": 1.0417,
240
+ "step": 160
241
+ },
242
+ {
243
+ "epoch": 0.15804597701149425,
244
+ "grad_norm": 0.24788380685714667,
245
+ "learning_rate": 9.899595728652883e-06,
246
+ "loss": 1.0332,
247
+ "step": 165
248
+ },
249
+ {
250
+ "epoch": 0.16283524904214558,
251
+ "grad_norm": 0.2570877531631883,
252
+ "learning_rate": 9.882233159632297e-06,
253
+ "loss": 1.0129,
254
+ "step": 170
255
+ },
256
+ {
257
+ "epoch": 0.16762452107279693,
258
+ "grad_norm": 0.26735645578353723,
259
+ "learning_rate": 9.863504382190838e-06,
260
+ "loss": 1.0255,
261
+ "step": 175
262
+ },
263
+ {
264
+ "epoch": 0.1724137931034483,
265
+ "grad_norm": 0.24596191852493296,
266
+ "learning_rate": 9.843414637252615e-06,
267
+ "loss": 1.0125,
268
+ "step": 180
269
+ },
270
+ {
271
+ "epoch": 0.1772030651340996,
272
+ "grad_norm": 0.2510254446995529,
273
+ "learning_rate": 9.821969546584922e-06,
274
+ "loss": 1.022,
275
+ "step": 185
276
+ },
277
+ {
278
+ "epoch": 0.18199233716475097,
279
+ "grad_norm": 0.2626288820781652,
280
+ "learning_rate": 9.79917511122509e-06,
281
+ "loss": 1.0016,
282
+ "step": 190
283
+ },
284
+ {
285
+ "epoch": 0.1867816091954023,
286
+ "grad_norm": 0.2588872211996587,
287
+ "learning_rate": 9.775037709801206e-06,
288
+ "loss": 1.0292,
289
+ "step": 195
290
+ },
291
+ {
292
+ "epoch": 0.19157088122605365,
293
+ "grad_norm": 0.28272889728543066,
294
+ "learning_rate": 9.749564096747148e-06,
295
+ "loss": 1.0255,
296
+ "step": 200
297
+ },
298
+ {
299
+ "epoch": 0.19636015325670497,
300
+ "grad_norm": 0.25691560926098406,
301
+ "learning_rate": 9.722761400412496e-06,
302
+ "loss": 1.0205,
303
+ "step": 205
304
+ },
305
+ {
306
+ "epoch": 0.20114942528735633,
307
+ "grad_norm": 0.31206698507075104,
308
+ "learning_rate": 9.694637121067764e-06,
309
+ "loss": 1.0018,
310
+ "step": 210
311
+ },
312
+ {
313
+ "epoch": 0.20593869731800765,
314
+ "grad_norm": 0.24725309050346184,
315
+ "learning_rate": 9.6651991288056e-06,
316
+ "loss": 1.013,
317
+ "step": 215
318
+ },
319
+ {
320
+ "epoch": 0.210727969348659,
321
+ "grad_norm": 0.2530356632425841,
322
+ "learning_rate": 9.63445566133846e-06,
323
+ "loss": 0.9921,
324
+ "step": 220
325
+ },
326
+ {
327
+ "epoch": 0.21551724137931033,
328
+ "grad_norm": 0.2784122469884546,
329
+ "learning_rate": 9.602415321693434e-06,
330
+ "loss": 1.0066,
331
+ "step": 225
332
+ },
333
+ {
334
+ "epoch": 0.22030651340996169,
335
+ "grad_norm": 0.3098632552557301,
336
+ "learning_rate": 9.569087075804842e-06,
337
+ "loss": 1.0062,
338
+ "step": 230
339
+ },
340
+ {
341
+ "epoch": 0.22509578544061304,
342
+ "grad_norm": 0.26451308039501314,
343
+ "learning_rate": 9.534480250005263e-06,
344
+ "loss": 0.9951,
345
+ "step": 235
346
+ },
347
+ {
348
+ "epoch": 0.22988505747126436,
349
+ "grad_norm": 0.27301629134605937,
350
+ "learning_rate": 9.498604528415731e-06,
351
+ "loss": 1.0347,
352
+ "step": 240
353
+ },
354
+ {
355
+ "epoch": 0.23467432950191572,
356
+ "grad_norm": 0.2891326880180606,
357
+ "learning_rate": 9.461469950235795e-06,
358
+ "loss": 1.0114,
359
+ "step": 245
360
+ },
361
+ {
362
+ "epoch": 0.23946360153256704,
363
+ "grad_norm": 0.26574747916476,
364
+ "learning_rate": 9.423086906934228e-06,
365
+ "loss": 1.0272,
366
+ "step": 250
367
+ },
368
+ {
369
+ "epoch": 0.2442528735632184,
370
+ "grad_norm": 0.26114773270931335,
371
+ "learning_rate": 9.38346613934115e-06,
372
+ "loss": 1.0039,
373
+ "step": 255
374
+ },
375
+ {
376
+ "epoch": 0.24904214559386972,
377
+ "grad_norm": 0.28383740993290196,
378
+ "learning_rate": 9.342618734642395e-06,
379
+ "loss": 1.0077,
380
+ "step": 260
381
+ },
382
+ {
383
+ "epoch": 0.25383141762452105,
384
+ "grad_norm": 0.28127484887890697,
385
+ "learning_rate": 9.300556123276955e-06,
386
+ "loss": 1.0306,
387
+ "step": 265
388
+ },
389
+ {
390
+ "epoch": 0.25862068965517243,
391
+ "grad_norm": 0.2702807344360773,
392
+ "learning_rate": 9.257290075738365e-06,
393
+ "loss": 0.9924,
394
+ "step": 270
395
+ },
396
+ {
397
+ "epoch": 0.26340996168582376,
398
+ "grad_norm": 0.2912366855364206,
399
+ "learning_rate": 9.212832699280942e-06,
400
+ "loss": 1.026,
401
+ "step": 275
402
+ },
403
+ {
404
+ "epoch": 0.2681992337164751,
405
+ "grad_norm": 0.30617702200806085,
406
+ "learning_rate": 9.16719643453177e-06,
407
+ "loss": 1.0247,
408
+ "step": 280
409
+ },
410
+ {
411
+ "epoch": 0.27298850574712646,
412
+ "grad_norm": 0.2590934020046758,
413
+ "learning_rate": 9.120394052009412e-06,
414
+ "loss": 1.0211,
415
+ "step": 285
416
+ },
417
+ {
418
+ "epoch": 0.2777777777777778,
419
+ "grad_norm": 0.27434400868197933,
420
+ "learning_rate": 9.072438648550304e-06,
421
+ "loss": 1.0243,
422
+ "step": 290
423
+ },
424
+ {
425
+ "epoch": 0.2825670498084291,
426
+ "grad_norm": 0.2813695079919259,
427
+ "learning_rate": 9.023343643643821e-06,
428
+ "loss": 1.0008,
429
+ "step": 295
430
+ },
431
+ {
432
+ "epoch": 0.28735632183908044,
433
+ "grad_norm": 0.2764043745947579,
434
+ "learning_rate": 8.973122775677078e-06,
435
+ "loss": 1.0066,
436
+ "step": 300
437
+ },
438
+ {
439
+ "epoch": 0.2921455938697318,
440
+ "grad_norm": 0.2993769190948342,
441
+ "learning_rate": 8.921790098090477e-06,
442
+ "loss": 1.015,
443
+ "step": 305
444
+ },
445
+ {
446
+ "epoch": 0.29693486590038315,
447
+ "grad_norm": 0.28718834695385625,
448
+ "learning_rate": 8.869359975445085e-06,
449
+ "loss": 1.0212,
450
+ "step": 310
451
+ },
452
+ {
453
+ "epoch": 0.3017241379310345,
454
+ "grad_norm": 0.3787907086651271,
455
+ "learning_rate": 8.815847079402972e-06,
456
+ "loss": 1.0079,
457
+ "step": 315
458
+ },
459
+ {
460
+ "epoch": 0.3065134099616858,
461
+ "grad_norm": 0.3012426621088511,
462
+ "learning_rate": 8.761266384621599e-06,
463
+ "loss": 1.0245,
464
+ "step": 320
465
+ },
466
+ {
467
+ "epoch": 0.3113026819923372,
468
+ "grad_norm": 0.545229044151294,
469
+ "learning_rate": 8.705633164563413e-06,
470
+ "loss": 1.0014,
471
+ "step": 325
472
+ },
473
+ {
474
+ "epoch": 0.3160919540229885,
475
+ "grad_norm": 0.3018913245413703,
476
+ "learning_rate": 8.648962987221837e-06,
477
+ "loss": 1.0035,
478
+ "step": 330
479
+ },
480
+ {
481
+ "epoch": 0.32088122605363983,
482
+ "grad_norm": 0.2937330943277994,
483
+ "learning_rate": 8.591271710764839e-06,
484
+ "loss": 0.9932,
485
+ "step": 335
486
+ },
487
+ {
488
+ "epoch": 0.32567049808429116,
489
+ "grad_norm": 0.30415781861083496,
490
+ "learning_rate": 8.532575479097294e-06,
491
+ "loss": 0.982,
492
+ "step": 340
493
+ },
494
+ {
495
+ "epoch": 0.33045977011494254,
496
+ "grad_norm": 0.2726446228184506,
497
+ "learning_rate": 8.472890717343391e-06,
498
+ "loss": 0.9992,
499
+ "step": 345
500
+ },
501
+ {
502
+ "epoch": 0.33524904214559387,
503
+ "grad_norm": 0.2841460417699516,
504
+ "learning_rate": 8.412234127250353e-06,
505
+ "loss": 1.0007,
506
+ "step": 350
507
+ },
508
+ {
509
+ "epoch": 0.3400383141762452,
510
+ "grad_norm": 0.313253280455998,
511
+ "learning_rate": 8.350622682514735e-06,
512
+ "loss": 0.9951,
513
+ "step": 355
514
+ },
515
+ {
516
+ "epoch": 0.3448275862068966,
517
+ "grad_norm": 0.33772538052784323,
518
+ "learning_rate": 8.288073624032634e-06,
519
+ "loss": 1.0169,
520
+ "step": 360
521
+ },
522
+ {
523
+ "epoch": 0.3496168582375479,
524
+ "grad_norm": 0.2810004404857808,
525
+ "learning_rate": 8.224604455075115e-06,
526
+ "loss": 1.0086,
527
+ "step": 365
528
+ },
529
+ {
530
+ "epoch": 0.3544061302681992,
531
+ "grad_norm": 0.2817546952474833,
532
+ "learning_rate": 8.160232936390239e-06,
533
+ "loss": 0.9888,
534
+ "step": 370
535
+ },
536
+ {
537
+ "epoch": 0.35919540229885055,
538
+ "grad_norm": 0.32316453097441,
539
+ "learning_rate": 8.094977081233006e-06,
540
+ "loss": 0.997,
541
+ "step": 375
542
+ },
543
+ {
544
+ "epoch": 0.36398467432950193,
545
+ "grad_norm": 0.2920029553424864,
546
+ "learning_rate": 8.02885515032467e-06,
547
+ "loss": 1.0172,
548
+ "step": 380
549
+ },
550
+ {
551
+ "epoch": 0.36877394636015326,
552
+ "grad_norm": 0.28736749227532504,
553
+ "learning_rate": 7.961885646742793e-06,
554
+ "loss": 1.0092,
555
+ "step": 385
556
+ },
557
+ {
558
+ "epoch": 0.3735632183908046,
559
+ "grad_norm": 0.29552008976856375,
560
+ "learning_rate": 7.894087310743468e-06,
561
+ "loss": 0.9952,
562
+ "step": 390
563
+ },
564
+ {
565
+ "epoch": 0.3783524904214559,
566
+ "grad_norm": 0.28037261402408636,
567
+ "learning_rate": 7.825479114517197e-06,
568
+ "loss": 1.0148,
569
+ "step": 395
570
+ },
571
+ {
572
+ "epoch": 0.3831417624521073,
573
+ "grad_norm": 0.3259544387907705,
574
+ "learning_rate": 7.756080256879837e-06,
575
+ "loss": 1.0172,
576
+ "step": 400
577
+ },
578
+ {
579
+ "epoch": 0.3879310344827586,
580
+ "grad_norm": 0.2866306787997861,
581
+ "learning_rate": 7.685910157900158e-06,
582
+ "loss": 0.9969,
583
+ "step": 405
584
+ },
585
+ {
586
+ "epoch": 0.39272030651340994,
587
+ "grad_norm": 0.27836252924957877,
588
+ "learning_rate": 7.614988453465469e-06,
589
+ "loss": 0.9981,
590
+ "step": 410
591
+ },
592
+ {
593
+ "epoch": 0.3975095785440613,
594
+ "grad_norm": 0.2881265537652179,
595
+ "learning_rate": 7.5433349897868445e-06,
596
+ "loss": 1.0075,
597
+ "step": 415
598
+ },
599
+ {
600
+ "epoch": 0.40229885057471265,
601
+ "grad_norm": 0.286210264814618,
602
+ "learning_rate": 7.470969817845518e-06,
603
+ "loss": 1.0025,
604
+ "step": 420
605
+ },
606
+ {
607
+ "epoch": 0.407088122605364,
608
+ "grad_norm": 0.28784515054514276,
609
+ "learning_rate": 7.397913187781962e-06,
610
+ "loss": 0.9918,
611
+ "step": 425
612
+ },
613
+ {
614
+ "epoch": 0.4118773946360153,
615
+ "grad_norm": 0.28225636678596183,
616
+ "learning_rate": 7.324185543229226e-06,
617
+ "loss": 1.0164,
618
+ "step": 430
619
+ },
620
+ {
621
+ "epoch": 0.4166666666666667,
622
+ "grad_norm": 0.30273570716493303,
623
+ "learning_rate": 7.249807515592149e-06,
624
+ "loss": 0.991,
625
+ "step": 435
626
+ },
627
+ {
628
+ "epoch": 0.421455938697318,
629
+ "grad_norm": 0.29638416230646264,
630
+ "learning_rate": 7.174799918274018e-06,
631
+ "loss": 1.0103,
632
+ "step": 440
633
+ },
634
+ {
635
+ "epoch": 0.42624521072796934,
636
+ "grad_norm": 0.27796285045314706,
637
+ "learning_rate": 7.099183740852296e-06,
638
+ "loss": 0.9929,
639
+ "step": 445
640
+ },
641
+ {
642
+ "epoch": 0.43103448275862066,
643
+ "grad_norm": 0.3024800017009751,
644
+ "learning_rate": 7.022980143205046e-06,
645
+ "loss": 0.9945,
646
+ "step": 450
647
+ },
648
+ {
649
+ "epoch": 0.43582375478927204,
650
+ "grad_norm": 0.308729839004599,
651
+ "learning_rate": 6.946210449589714e-06,
652
+ "loss": 1.0131,
653
+ "step": 455
654
+ },
655
+ {
656
+ "epoch": 0.44061302681992337,
657
+ "grad_norm": 0.29787252415350113,
658
+ "learning_rate": 6.868896142675903e-06,
659
+ "loss": 1.0053,
660
+ "step": 460
661
+ },
662
+ {
663
+ "epoch": 0.4454022988505747,
664
+ "grad_norm": 0.28555796101935293,
665
+ "learning_rate": 6.791058857533814e-06,
666
+ "loss": 1.0106,
667
+ "step": 465
668
+ },
669
+ {
670
+ "epoch": 0.4501915708812261,
671
+ "grad_norm": 0.27754653040108523,
672
+ "learning_rate": 6.712720375580057e-06,
673
+ "loss": 1.0127,
674
+ "step": 470
675
+ },
676
+ {
677
+ "epoch": 0.4549808429118774,
678
+ "grad_norm": 0.3064883256256046,
679
+ "learning_rate": 6.633902618482484e-06,
680
+ "loss": 1.0137,
681
+ "step": 475
682
+ },
683
+ {
684
+ "epoch": 0.45977011494252873,
685
+ "grad_norm": 0.2871757827962217,
686
+ "learning_rate": 6.554627642025807e-06,
687
+ "loss": 0.9808,
688
+ "step": 480
689
+ },
690
+ {
691
+ "epoch": 0.46455938697318006,
692
+ "grad_norm": 0.32016740057261645,
693
+ "learning_rate": 6.474917629939652e-06,
694
+ "loss": 1.0154,
695
+ "step": 485
696
+ },
697
+ {
698
+ "epoch": 0.46934865900383144,
699
+ "grad_norm": 0.28945157272232624,
700
+ "learning_rate": 6.394794887690838e-06,
701
+ "loss": 0.987,
702
+ "step": 490
703
+ },
704
+ {
705
+ "epoch": 0.47413793103448276,
706
+ "grad_norm": 0.45234637623400176,
707
+ "learning_rate": 6.314281836241573e-06,
708
+ "loss": 1.0072,
709
+ "step": 495
710
+ },
711
+ {
712
+ "epoch": 0.4789272030651341,
713
+ "grad_norm": 0.3043561526142259,
714
+ "learning_rate": 6.233401005775339e-06,
715
+ "loss": 0.9947,
716
+ "step": 500
717
+ },
718
+ {
719
+ "epoch": 0.4837164750957854,
720
+ "grad_norm": 0.3021339898048032,
721
+ "learning_rate": 6.1521750293922035e-06,
722
+ "loss": 1.0168,
723
+ "step": 505
724
+ },
725
+ {
726
+ "epoch": 0.4885057471264368,
727
+ "grad_norm": 0.262840671697266,
728
+ "learning_rate": 6.070626636775349e-06,
729
+ "loss": 0.9854,
730
+ "step": 510
731
+ },
732
+ {
733
+ "epoch": 0.4932950191570881,
734
+ "grad_norm": 0.2717384304425024,
735
+ "learning_rate": 5.988778647830554e-06,
736
+ "loss": 0.9847,
737
+ "step": 515
738
+ },
739
+ {
740
+ "epoch": 0.49808429118773945,
741
+ "grad_norm": 0.3005959006799932,
742
+ "learning_rate": 5.906653966300444e-06,
743
+ "loss": 1.0007,
744
+ "step": 520
745
+ },
746
+ {
747
+ "epoch": 0.5028735632183908,
748
+ "grad_norm": 0.2810709946318669,
749
+ "learning_rate": 5.824275573355278e-06,
750
+ "loss": 0.9891,
751
+ "step": 525
752
+ },
753
+ {
754
+ "epoch": 0.5076628352490421,
755
+ "grad_norm": 0.31324328313914135,
756
+ "learning_rate": 5.741666521162055e-06,
757
+ "loss": 1.0049,
758
+ "step": 530
759
+ },
760
+ {
761
+ "epoch": 0.5124521072796935,
762
+ "grad_norm": 0.29241234641844166,
763
+ "learning_rate": 5.658849926433774e-06,
764
+ "loss": 1.0019,
765
+ "step": 535
766
+ },
767
+ {
768
+ "epoch": 0.5172413793103449,
769
+ "grad_norm": 0.29226941844525284,
770
+ "learning_rate": 5.575848963960621e-06,
771
+ "loss": 0.9964,
772
+ "step": 540
773
+ },
774
+ {
775
+ "epoch": 0.5220306513409961,
776
+ "grad_norm": 0.2869562590824223,
777
+ "learning_rate": 5.4926868601249e-06,
778
+ "loss": 1.003,
779
+ "step": 545
780
+ },
781
+ {
782
+ "epoch": 0.5268199233716475,
783
+ "grad_norm": 0.2887086296052449,
784
+ "learning_rate": 5.4093868864015405e-06,
785
+ "loss": 0.9911,
786
+ "step": 550
787
+ },
788
+ {
789
+ "epoch": 0.5316091954022989,
790
+ "grad_norm": 0.2990975733324071,
791
+ "learning_rate": 5.325972352845965e-06,
792
+ "loss": 0.9961,
793
+ "step": 555
794
+ },
795
+ {
796
+ "epoch": 0.5363984674329502,
797
+ "grad_norm": 0.2964344269886325,
798
+ "learning_rate": 5.24246660157119e-06,
799
+ "loss": 1.0045,
800
+ "step": 560
801
+ },
802
+ {
803
+ "epoch": 0.5411877394636015,
804
+ "grad_norm": 0.3080697315686108,
805
+ "learning_rate": 5.1588930002159255e-06,
806
+ "loss": 0.9897,
807
+ "step": 565
808
+ },
809
+ {
810
+ "epoch": 0.5459770114942529,
811
+ "grad_norm": 0.3217970889035437,
812
+ "learning_rate": 5.075274935405554e-06,
813
+ "loss": 1.0022,
814
+ "step": 570
815
+ },
816
+ {
817
+ "epoch": 0.5507662835249042,
818
+ "grad_norm": 0.32311237689799727,
819
+ "learning_rate": 4.991635806207788e-06,
820
+ "loss": 0.9918,
821
+ "step": 575
822
+ },
823
+ {
824
+ "epoch": 0.5555555555555556,
825
+ "grad_norm": 0.3076150767243147,
826
+ "learning_rate": 4.90799901758484e-06,
827
+ "loss": 1.0156,
828
+ "step": 580
829
+ },
830
+ {
831
+ "epoch": 0.5603448275862069,
832
+ "grad_norm": 0.29579117268352634,
833
+ "learning_rate": 4.824387973843957e-06,
834
+ "loss": 0.9859,
835
+ "step": 585
836
+ },
837
+ {
838
+ "epoch": 0.5651340996168582,
839
+ "grad_norm": 0.27808702972070926,
840
+ "learning_rate": 4.74082607208812e-06,
841
+ "loss": 0.988,
842
+ "step": 590
843
+ },
844
+ {
845
+ "epoch": 0.5699233716475096,
846
+ "grad_norm": 0.2772865889207572,
847
+ "learning_rate": 4.6573366956687885e-06,
848
+ "loss": 1.0042,
849
+ "step": 595
850
+ },
851
+ {
852
+ "epoch": 0.5747126436781609,
853
+ "grad_norm": 0.2741988412251713,
854
+ "learning_rate": 4.573943207642452e-06,
855
+ "loss": 1.018,
856
+ "step": 600
857
+ },
858
+ {
859
+ "epoch": 0.5795019157088123,
860
+ "grad_norm": 0.3041843000888798,
861
+ "learning_rate": 4.4906689442328935e-06,
862
+ "loss": 1.0095,
863
+ "step": 605
864
+ },
865
+ {
866
+ "epoch": 0.5842911877394636,
867
+ "grad_norm": 0.3234627396285633,
868
+ "learning_rate": 4.407537208300957e-06,
869
+ "loss": 0.9981,
870
+ "step": 610
871
+ },
872
+ {
873
+ "epoch": 0.5890804597701149,
874
+ "grad_norm": 0.30361837699199173,
875
+ "learning_rate": 4.3245712628236356e-06,
876
+ "loss": 0.9945,
877
+ "step": 615
878
+ },
879
+ {
880
+ "epoch": 0.5938697318007663,
881
+ "grad_norm": 0.3068819956047309,
882
+ "learning_rate": 4.241794324384334e-06,
883
+ "loss": 0.9829,
884
+ "step": 620
885
+ },
886
+ {
887
+ "epoch": 0.5986590038314177,
888
+ "grad_norm": 0.3010857723276443,
889
+ "learning_rate": 4.159229556676111e-06,
890
+ "loss": 0.9778,
891
+ "step": 625
892
+ },
893
+ {
894
+ "epoch": 0.603448275862069,
895
+ "grad_norm": 0.3212249651416007,
896
+ "learning_rate": 4.076900064019721e-06,
897
+ "loss": 1.007,
898
+ "step": 630
899
+ },
900
+ {
901
+ "epoch": 0.6082375478927203,
902
+ "grad_norm": 0.2842660737481042,
903
+ "learning_rate": 3.994828884898267e-06,
904
+ "loss": 1.0056,
905
+ "step": 635
906
+ },
907
+ {
908
+ "epoch": 0.6130268199233716,
909
+ "grad_norm": 0.2887728882458368,
910
+ "learning_rate": 3.91303898551028e-06,
911
+ "loss": 1.0131,
912
+ "step": 640
913
+ },
914
+ {
915
+ "epoch": 0.617816091954023,
916
+ "grad_norm": 0.31374850764975326,
917
+ "learning_rate": 3.8315532533430285e-06,
918
+ "loss": 0.9979,
919
+ "step": 645
920
+ },
921
+ {
922
+ "epoch": 0.6226053639846744,
923
+ "grad_norm": 0.2990812859204113,
924
+ "learning_rate": 3.7503944907678543e-06,
925
+ "loss": 0.9979,
926
+ "step": 650
927
+ },
928
+ {
929
+ "epoch": 0.6273946360153256,
930
+ "grad_norm": 0.31174935012060845,
931
+ "learning_rate": 3.6695854086593126e-06,
932
+ "loss": 0.9907,
933
+ "step": 655
934
+ },
935
+ {
936
+ "epoch": 0.632183908045977,
937
+ "grad_norm": 0.30150077718156976,
938
+ "learning_rate": 3.5891486200399413e-06,
939
+ "loss": 0.9937,
940
+ "step": 660
941
+ },
942
+ {
943
+ "epoch": 0.6369731800766284,
944
+ "grad_norm": 0.304753149818871,
945
+ "learning_rate": 3.509106633752387e-06,
946
+ "loss": 1.0164,
947
+ "step": 665
948
+ },
949
+ {
950
+ "epoch": 0.6417624521072797,
951
+ "grad_norm": 0.2993950641947178,
952
+ "learning_rate": 3.429481848160702e-06,
953
+ "loss": 1.0093,
954
+ "step": 670
955
+ },
956
+ {
957
+ "epoch": 0.646551724137931,
958
+ "grad_norm": 0.3102107207132909,
959
+ "learning_rate": 3.350296544882543e-06,
960
+ "loss": 0.969,
961
+ "step": 675
962
+ },
963
+ {
964
+ "epoch": 0.6513409961685823,
965
+ "grad_norm": 0.27647028696280174,
966
+ "learning_rate": 3.2715728825540525e-06,
967
+ "loss": 1.0102,
968
+ "step": 680
969
+ },
970
+ {
971
+ "epoch": 0.6561302681992337,
972
+ "grad_norm": 0.30329159137172645,
973
+ "learning_rate": 3.19333289062915e-06,
974
+ "loss": 0.9992,
975
+ "step": 685
976
+ },
977
+ {
978
+ "epoch": 0.6609195402298851,
979
+ "grad_norm": 0.3233989260192753,
980
+ "learning_rate": 3.1155984632149565e-06,
981
+ "loss": 0.9984,
982
+ "step": 690
983
+ },
984
+ {
985
+ "epoch": 0.6657088122605364,
986
+ "grad_norm": 0.284592333087901,
987
+ "learning_rate": 3.0383913529451286e-06,
988
+ "loss": 1.0097,
989
+ "step": 695
990
+ },
991
+ {
992
+ "epoch": 0.6704980842911877,
993
+ "grad_norm": 0.284014773245373,
994
+ "learning_rate": 2.961733164892744e-06,
995
+ "loss": 1.0048,
996
+ "step": 700
997
+ },
998
+ {
999
+ "epoch": 0.6752873563218391,
1000
+ "grad_norm": 0.29864923395210635,
1001
+ "learning_rate": 2.8856453505245018e-06,
1002
+ "loss": 1.008,
1003
+ "step": 705
1004
+ },
1005
+ {
1006
+ "epoch": 0.6800766283524904,
1007
+ "grad_norm": 0.28455877244795186,
1008
+ "learning_rate": 2.8101492016979027e-06,
1009
+ "loss": 1.0082,
1010
+ "step": 710
1011
+ },
1012
+ {
1013
+ "epoch": 0.6848659003831418,
1014
+ "grad_norm": 0.29297293441472344,
1015
+ "learning_rate": 2.7352658447030882e-06,
1016
+ "loss": 1.0137,
1017
+ "step": 715
1018
+ },
1019
+ {
1020
+ "epoch": 0.6896551724137931,
1021
+ "grad_norm": 0.29774107321754356,
1022
+ "learning_rate": 2.6610162343510183e-06,
1023
+ "loss": 0.9878,
1024
+ "step": 720
1025
+ },
1026
+ {
1027
+ "epoch": 0.6944444444444444,
1028
+ "grad_norm": 0.2886426546973218,
1029
+ "learning_rate": 2.587421148109619e-06,
1030
+ "loss": 0.9855,
1031
+ "step": 725
1032
+ },
1033
+ {
1034
+ "epoch": 0.6992337164750958,
1035
+ "grad_norm": 0.30340545161406907,
1036
+ "learning_rate": 2.5145011802895835e-06,
1037
+ "loss": 1.004,
1038
+ "step": 730
1039
+ },
1040
+ {
1041
+ "epoch": 0.7040229885057471,
1042
+ "grad_norm": 0.28629744556011416,
1043
+ "learning_rate": 2.4422767362814045e-06,
1044
+ "loss": 0.9935,
1045
+ "step": 735
1046
+ },
1047
+ {
1048
+ "epoch": 0.7088122605363985,
1049
+ "grad_norm": 0.29480150488982737,
1050
+ "learning_rate": 2.370768026845276e-06,
1051
+ "loss": 1.0013,
1052
+ "step": 740
1053
+ },
1054
+ {
1055
+ "epoch": 0.7136015325670498,
1056
+ "grad_norm": 0.30424158130358797,
1057
+ "learning_rate": 2.299995062455459e-06,
1058
+ "loss": 0.9932,
1059
+ "step": 745
1060
+ },
1061
+ {
1062
+ "epoch": 0.7183908045977011,
1063
+ "grad_norm": 0.3179663498265742,
1064
+ "learning_rate": 2.2299776477007073e-06,
1065
+ "loss": 1.007,
1066
+ "step": 750
1067
+ },
1068
+ {
1069
+ "epoch": 0.7231800766283525,
1070
+ "grad_norm": 0.303515618658323,
1071
+ "learning_rate": 2.16073537574229e-06,
1072
+ "loss": 0.9963,
1073
+ "step": 755
1074
+ },
1075
+ {
1076
+ "epoch": 0.7279693486590039,
1077
+ "grad_norm": 0.30307569076630475,
1078
+ "learning_rate": 2.0922876228311833e-06,
1079
+ "loss": 0.9772,
1080
+ "step": 760
1081
+ },
1082
+ {
1083
+ "epoch": 0.7327586206896551,
1084
+ "grad_norm": 0.3143565291039063,
1085
+ "learning_rate": 2.0246535428859652e-06,
1086
+ "loss": 0.9899,
1087
+ "step": 765
1088
+ },
1089
+ {
1090
+ "epoch": 0.7375478927203065,
1091
+ "grad_norm": 0.28738666111079514,
1092
+ "learning_rate": 1.957852062132924e-06,
1093
+ "loss": 0.9848,
1094
+ "step": 770
1095
+ },
1096
+ {
1097
+ "epoch": 0.7423371647509579,
1098
+ "grad_norm": 0.2850271965375348,
1099
+ "learning_rate": 1.8919018738098704e-06,
1100
+ "loss": 1.0076,
1101
+ "step": 775
1102
+ },
1103
+ {
1104
+ "epoch": 0.7471264367816092,
1105
+ "grad_norm": 0.29940367717031596,
1106
+ "learning_rate": 1.8268214329351797e-06,
1107
+ "loss": 0.9864,
1108
+ "step": 780
1109
+ },
1110
+ {
1111
+ "epoch": 0.7519157088122606,
1112
+ "grad_norm": 0.2957695153675884,
1113
+ "learning_rate": 1.762628951143454e-06,
1114
+ "loss": 0.9972,
1115
+ "step": 785
1116
+ },
1117
+ {
1118
+ "epoch": 0.7567049808429118,
1119
+ "grad_norm": 0.2884019240313734,
1120
+ "learning_rate": 1.6993423915893241e-06,
1121
+ "loss": 0.9969,
1122
+ "step": 790
1123
+ },
1124
+ {
1125
+ "epoch": 0.7614942528735632,
1126
+ "grad_norm": 0.30882836660635715,
1127
+ "learning_rate": 1.6369794639207626e-06,
1128
+ "loss": 1.0005,
1129
+ "step": 795
1130
+ },
1131
+ {
1132
+ "epoch": 0.7662835249042146,
1133
+ "grad_norm": 0.2880996272538283,
1134
+ "learning_rate": 1.575557619323353e-06,
1135
+ "loss": 0.9853,
1136
+ "step": 800
1137
+ },
1138
+ {
1139
+ "epoch": 0.7710727969348659,
1140
+ "grad_norm": 0.28083999464104503,
1141
+ "learning_rate": 1.5150940456368784e-06,
1142
+ "loss": 0.9579,
1143
+ "step": 805
1144
+ },
1145
+ {
1146
+ "epoch": 0.7758620689655172,
1147
+ "grad_norm": 0.2977795145319976,
1148
+ "learning_rate": 1.4556056625455922e-06,
1149
+ "loss": 0.9944,
1150
+ "step": 810
1151
+ },
1152
+ {
1153
+ "epoch": 0.7806513409961686,
1154
+ "grad_norm": 0.3044689990672244,
1155
+ "learning_rate": 1.3971091168435463e-06,
1156
+ "loss": 0.997,
1157
+ "step": 815
1158
+ },
1159
+ {
1160
+ "epoch": 0.7854406130268199,
1161
+ "grad_norm": 0.2951506289424605,
1162
+ "learning_rate": 1.3396207777762732e-06,
1163
+ "loss": 1.0116,
1164
+ "step": 820
1165
+ },
1166
+ {
1167
+ "epoch": 0.7902298850574713,
1168
+ "grad_norm": 0.284297818009739,
1169
+ "learning_rate": 1.2831567324601325e-06,
1170
+ "loss": 0.9792,
1171
+ "step": 825
1172
+ },
1173
+ {
1174
+ "epoch": 0.7950191570881227,
1175
+ "grad_norm": 0.31450502170794314,
1176
+ "learning_rate": 1.2277327813806123e-06,
1177
+ "loss": 0.9927,
1178
+ "step": 830
1179
+ },
1180
+ {
1181
+ "epoch": 0.7998084291187739,
1182
+ "grad_norm": 0.2896076523698672,
1183
+ "learning_rate": 1.173364433970835e-06,
1184
+ "loss": 0.9795,
1185
+ "step": 835
1186
+ },
1187
+ {
1188
+ "epoch": 0.8045977011494253,
1189
+ "grad_norm": 0.2834770010415917,
1190
+ "learning_rate": 1.1200669042715163e-06,
1191
+ "loss": 0.9966,
1192
+ "step": 840
1193
+ },
1194
+ {
1195
+ "epoch": 0.8093869731800766,
1196
+ "grad_norm": 0.34009482243032485,
1197
+ "learning_rate": 1.0678551066735671e-06,
1198
+ "loss": 0.9767,
1199
+ "step": 845
1200
+ },
1201
+ {
1202
+ "epoch": 0.814176245210728,
1203
+ "grad_norm": 0.29936688994813515,
1204
+ "learning_rate": 1.0167436517445777e-06,
1205
+ "loss": 1.003,
1206
+ "step": 850
1207
+ },
1208
+ {
1209
+ "epoch": 0.8189655172413793,
1210
+ "grad_norm": 0.29389391810900556,
1211
+ "learning_rate": 9.66746842140287e-07,
1212
+ "loss": 0.9888,
1213
+ "step": 855
1214
+ },
1215
+ {
1216
+ "epoch": 0.8237547892720306,
1217
+ "grad_norm": 0.29840441886793445,
1218
+ "learning_rate": 9.178786686022417e-07,
1219
+ "loss": 1.0011,
1220
+ "step": 860
1221
+ },
1222
+ {
1223
+ "epoch": 0.828544061302682,
1224
+ "grad_norm": 0.3050673773124508,
1225
+ "learning_rate": 8.701528060427194e-07,
1226
+ "loss": 0.9867,
1227
+ "step": 865
1228
+ },
1229
+ {
1230
+ "epoch": 0.8333333333333334,
1231
+ "grad_norm": 0.2809885669064277,
1232
+ "learning_rate": 8.235826097180566e-07,
1233
+ "loss": 0.9802,
1234
+ "step": 870
1235
+ },
1236
+ {
1237
+ "epoch": 0.8381226053639846,
1238
+ "grad_norm": 0.29288746432276136,
1239
+ "learning_rate": 7.781811114913995e-07,
1240
+ "loss": 0.9965,
1241
+ "step": 875
1242
+ },
1243
+ {
1244
+ "epoch": 0.842911877394636,
1245
+ "grad_norm": 0.29110203494522685,
1246
+ "learning_rate": 7.339610161859618e-07,
1247
+ "loss": 0.9809,
1248
+ "step": 880
1249
+ },
1250
+ {
1251
+ "epoch": 0.8477011494252874,
1252
+ "grad_norm": 0.30532295542101273,
1253
+ "learning_rate": 6.909346980298093e-07,
1254
+ "loss": 1.0039,
1255
+ "step": 885
1256
+ },
1257
+ {
1258
+ "epoch": 0.8524904214559387,
1259
+ "grad_norm": 0.3178843529727934,
1260
+ "learning_rate": 6.49114197193137e-07,
1261
+ "loss": 0.9992,
1262
+ "step": 890
1263
+ },
1264
+ {
1265
+ "epoch": 0.85727969348659,
1266
+ "grad_norm": 0.30030807864509074,
1267
+ "learning_rate": 6.085112164190466e-07,
1268
+ "loss": 0.9967,
1269
+ "step": 895
1270
+ },
1271
+ {
1272
+ "epoch": 0.8620689655172413,
1273
+ "grad_norm": 0.283276426877559,
1274
+ "learning_rate": 5.691371177487215e-07,
1275
+ "loss": 0.9951,
1276
+ "step": 900
1277
+ },
1278
+ {
1279
+ "epoch": 0.8668582375478927,
1280
+ "grad_norm": 0.2771047728402987,
1281
+ "learning_rate": 5.310029193419697e-07,
1282
+ "loss": 0.9823,
1283
+ "step": 905
1284
+ },
1285
+ {
1286
+ "epoch": 0.8716475095785441,
1287
+ "grad_norm": 0.30200927801260424,
1288
+ "learning_rate": 4.941192923939769e-07,
1289
+ "loss": 0.9944,
1290
+ "step": 910
1291
+ },
1292
+ {
1293
+ "epoch": 0.8764367816091954,
1294
+ "grad_norm": 0.29243200143497705,
1295
+ "learning_rate": 4.5849655814915683e-07,
1296
+ "loss": 0.9923,
1297
+ "step": 915
1298
+ },
1299
+ {
1300
+ "epoch": 0.8812260536398467,
1301
+ "grad_norm": 0.38073762499381975,
1302
+ "learning_rate": 4.2414468501293217e-07,
1303
+ "loss": 0.9931,
1304
+ "step": 920
1305
+ },
1306
+ {
1307
+ "epoch": 0.8860153256704981,
1308
+ "grad_norm": 0.28457589633356245,
1309
+ "learning_rate": 3.9107328576224736e-07,
1310
+ "loss": 0.9879,
1311
+ "step": 925
1312
+ },
1313
+ {
1314
+ "epoch": 0.8908045977011494,
1315
+ "grad_norm": 0.29913915061920765,
1316
+ "learning_rate": 3.5929161485559694e-07,
1317
+ "loss": 1.0269,
1318
+ "step": 930
1319
+ },
1320
+ {
1321
+ "epoch": 0.8955938697318008,
1322
+ "grad_norm": 0.28795408054019833,
1323
+ "learning_rate": 3.2880856584333043e-07,
1324
+ "loss": 0.984,
1325
+ "step": 935
1326
+ },
1327
+ {
1328
+ "epoch": 0.9003831417624522,
1329
+ "grad_norm": 0.28456219117461123,
1330
+ "learning_rate": 2.9963266887894526e-07,
1331
+ "loss": 1.0007,
1332
+ "step": 940
1333
+ },
1334
+ {
1335
+ "epoch": 0.9051724137931034,
1336
+ "grad_norm": 0.30655155945282825,
1337
+ "learning_rate": 2.717720883320685e-07,
1338
+ "loss": 1.0093,
1339
+ "step": 945
1340
+ },
1341
+ {
1342
+ "epoch": 0.9099616858237548,
1343
+ "grad_norm": 0.28653578966574017,
1344
+ "learning_rate": 2.4523462050379864e-07,
1345
+ "loss": 0.9861,
1346
+ "step": 950
1347
+ },
1348
+ {
1349
+ "epoch": 0.9147509578544061,
1350
+ "grad_norm": 0.30075605446410747,
1351
+ "learning_rate": 2.2002769144504943e-07,
1352
+ "loss": 0.997,
1353
+ "step": 955
1354
+ },
1355
+ {
1356
+ "epoch": 0.9195402298850575,
1357
+ "grad_norm": 0.28984184857418177,
1358
+ "learning_rate": 1.9615835487849677e-07,
1359
+ "loss": 0.9772,
1360
+ "step": 960
1361
+ },
1362
+ {
1363
+ "epoch": 0.9243295019157088,
1364
+ "grad_norm": 0.2971882152661699,
1365
+ "learning_rate": 1.7363329022471564e-07,
1366
+ "loss": 1.0125,
1367
+ "step": 965
1368
+ },
1369
+ {
1370
+ "epoch": 0.9291187739463601,
1371
+ "grad_norm": 0.30026576782404996,
1372
+ "learning_rate": 1.5245880073305963e-07,
1373
+ "loss": 1.0128,
1374
+ "step": 970
1375
+ },
1376
+ {
1377
+ "epoch": 0.9339080459770115,
1378
+ "grad_norm": 0.2971224564720333,
1379
+ "learning_rate": 1.3264081171780797e-07,
1380
+ "loss": 1.0114,
1381
+ "step": 975
1382
+ },
1383
+ {
1384
+ "epoch": 0.9386973180076629,
1385
+ "grad_norm": 0.278422035094591,
1386
+ "learning_rate": 1.1418486890006574e-07,
1387
+ "loss": 0.982,
1388
+ "step": 980
1389
+ },
1390
+ {
1391
+ "epoch": 0.9434865900383141,
1392
+ "grad_norm": 0.29059990973676236,
1393
+ "learning_rate": 9.709613685589314e-08,
1394
+ "loss": 0.998,
1395
+ "step": 985
1396
+ },
1397
+ {
1398
+ "epoch": 0.9482758620689655,
1399
+ "grad_norm": 0.29092073403074437,
1400
+ "learning_rate": 8.137939757108526e-08,
1401
+ "loss": 1.011,
1402
+ "step": 990
1403
+ },
1404
+ {
1405
+ "epoch": 0.9530651340996169,
1406
+ "grad_norm": 0.2923297366913393,
1407
+ "learning_rate": 6.703904910301929e-08,
1408
+ "loss": 0.9656,
1409
+ "step": 995
1410
+ },
1411
+ {
1412
+ "epoch": 0.9578544061302682,
1413
+ "grad_norm": 0.2931478943843858,
1414
+ "learning_rate": 5.4079104349929465e-08,
1415
+ "loss": 1.0036,
1416
+ "step": 1000
1417
+ },
1418
+ {
1419
+ "epoch": 0.9626436781609196,
1420
+ "grad_norm": 0.2923376776086664,
1421
+ "learning_rate": 4.250318992797375e-08,
1422
+ "loss": 1.0083,
1423
+ "step": 1005
1424
+ },
1425
+ {
1426
+ "epoch": 0.9674329501915708,
1427
+ "grad_norm": 0.2863912260830555,
1428
+ "learning_rate": 3.231454515638221e-08,
1429
+ "loss": 0.9955,
1430
+ "step": 1010
1431
+ },
1432
+ {
1433
+ "epoch": 0.9722222222222222,
1434
+ "grad_norm": 0.2974177186982834,
1435
+ "learning_rate": 2.351602115099272e-08,
1436
+ "loss": 0.9865,
1437
+ "step": 1015
1438
+ },
1439
+ {
1440
+ "epoch": 0.9770114942528736,
1441
+ "grad_norm": 0.29561321400349233,
1442
+ "learning_rate": 1.6110080026414123e-08,
1443
+ "loss": 1.0083,
1444
+ "step": 1020
1445
+ },
1446
+ {
1447
+ "epoch": 0.9818007662835249,
1448
+ "grad_norm": 0.29932685669861786,
1449
+ "learning_rate": 1.0098794207047402e-08,
1450
+ "loss": 1.0118,
1451
+ "step": 1025
1452
+ },
1453
+ {
1454
+ "epoch": 0.9865900383141762,
1455
+ "grad_norm": 0.29155813115628504,
1456
+ "learning_rate": 5.483845847151226e-09,
1457
+ "loss": 0.9846,
1458
+ "step": 1030
1459
+ },
1460
+ {
1461
+ "epoch": 0.9913793103448276,
1462
+ "grad_norm": 0.2910040393119768,
1463
+ "learning_rate": 2.2665263601240328e-09,
1464
+ "loss": 0.9812,
1465
+ "step": 1035
1466
+ },
1467
+ {
1468
+ "epoch": 0.9961685823754789,
1469
+ "grad_norm": 0.28919827046140006,
1470
+ "learning_rate": 4.4773605712089554e-10,
1471
+ "loss": 1.0115,
1472
+ "step": 1040
1473
+ },
1474
+ {
1475
+ "epoch": 1.0,
1476
+ "eval_runtime": 6595.9355,
1477
+ "eval_samples_per_second": 3.504,
1478
+ "eval_steps_per_second": 0.876,
1479
+ "step": 1044
1480
+ },
1481
+ {
1482
+ "epoch": 1.0,
1483
+ "step": 1044,
1484
+ "total_flos": 1940427569627136.0,
1485
+ "train_loss": 1.0122476654034465,
1486
+ "train_runtime": 20247.7008,
1487
+ "train_samples_per_second": 3.298,
1488
+ "train_steps_per_second": 0.052
1489
+ }
1490
+ ],
1491
+ "logging_steps": 5,
1492
+ "max_steps": 1044,
1493
+ "num_input_tokens_seen": 0,
1494
+ "num_train_epochs": 1,
1495
+ "save_steps": 100,
1496
+ "stateful_callbacks": {
1497
+ "TrainerControl": {
1498
+ "args": {
1499
+ "should_epoch_stop": false,
1500
+ "should_evaluate": false,
1501
+ "should_log": false,
1502
+ "should_save": true,
1503
+ "should_training_stop": true
1504
+ },
1505
+ "attributes": {}
1506
+ }
1507
+ },
1508
+ "total_flos": 1940427569627136.0,
1509
+ "train_batch_size": 16,
1510
+ "trial_name": null,
1511
+ "trial_params": null
1512
+ }
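
The learning_rate values logged above trace linear warmup over the first ~105 steps (warmup_ratio 0.1 of the 1,044-step run) followed by cosine decay toward zero. A small sketch that reproduces the logged values, assuming the standard linear-warmup + cosine schedule (as in `transformers.get_cosine_schedule_with_warmup`):

```python
import math

# Reproduces the "learning_rate" entries in the log above, assuming the usual
# linear-warmup + cosine-decay schedule (lr_scheduler_type: cosine, warmup_ratio: 0.1).
PEAK_LR = 1e-5
TOTAL_STEPS = 1044
WARMUP_STEPS = 105  # ~0.1 * 1044, matching the first logged lr of 1e-5 / 105

def lr_at(step: int) -> float:
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS                      # linear warmup
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    return PEAK_LR * 0.5 * (1.0 + math.cos(math.pi * progress))   # cosine decay to 0

print(lr_at(1))    # ~9.5238e-08, matches the step-1 entry
print(lr_at(105))  # 1e-05, the peak at the end of warmup
print(lr_at(110))  # ~9.9993e-06, matches the step-110 entry
```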