just1nseo committed on
Commit
3fb2d1c
1 Parent(s): f6c74c2

Model save

README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ base_model: alignment-handbook/zephyr-7b-sft-full
+ library_name: peft
+ license: apache-2.0
+ tags:
+ - trl
+ - dpo
+ - generated_from_trainer
+ model-index:
+ - name: zephyr-dpo-qlora-uf-ours-5e-6-epoch1
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # zephyr-dpo-qlora-uf-ours-5e-6-epoch1
+
+ This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.8935
+ - Rewards/chosen: -4.8886
+ - Rewards/rejected: -5.8224
+ - Rewards/accuracies: 0.6470
+ - Rewards/margins: 0.9339
+ - Rewards/margins Max: 4.5089
+ - Rewards/margins Min: -2.8033
+ - Rewards/margins Std: 2.4986
+ - Logps/rejected: -840.8231
+ - Logps/chosen: -773.4503
+ - Logits/rejected: -1.3417
+ - Logits/chosen: -1.4023
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 1
+
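The derived totals in the list above follow from the per-device settings in the usual way. A minimal sanity-check sketch, using only values reported on this card:

```python
import math

# Values taken from the hyperparameter list above.
train_batch_size = 4    # per-device train batch size
num_devices = 2
grad_accum_steps = 2
train_samples = 5678    # reported in all_results.json

# Effective batch size = per-device batch * devices * gradient accumulation.
total_train_batch_size = train_batch_size * num_devices * grad_accum_steps
print(total_train_batch_size)  # 16, matching total_train_batch_size above

# Optimizer steps needed for one epoch over the training set.
steps_per_epoch = math.ceil(train_samples / total_train_batch_size)
print(steps_per_epoch)  # 355, matching global_step in trainer_state.json
```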
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Rewards/margins Max | Rewards/margins Min | Rewards/margins Std | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
+ |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:-------------------:|:-------------------:|:-------------------:|:--------------:|:------------:|:---------------:|:-------------:|
+ | 0.4419 | 0.28 | 100 | 0.6650 | -0.4529 | -0.5977 | 0.6240 | 0.1449 | 0.8619 | -0.5418 | 0.4692 | -318.3518 | -329.8811 | -2.5449 | -2.5719 |
+ | 0.1872 | 0.56 | 200 | 0.7859 | -2.8392 | -3.5400 | 0.6630 | 0.7008 | 3.4374 | -2.1827 | 1.9158 | -612.5828 | -568.5178 | -1.4159 | -1.4771 |
+ | 0.1102 | 0.85 | 300 | 0.8935 | -4.8886 | -5.8224 | 0.6470 | 0.9339 | 4.5089 | -2.8033 | 2.4986 | -840.8231 | -773.4503 | -1.3417 | -1.4023 |
+
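In DPO, the reported margin is simply the chosen reward minus the rejected reward. A quick consistency check over the evaluation rows above (values copied from the table, rounded to four decimals):

```python
# (rewards/chosen, rewards/rejected, rewards/margins) at eval steps 100, 200, 300
eval_rows = [
    (-0.4529, -0.5977, 0.1449),
    (-2.8392, -3.5400, 0.7008),
    (-4.8886, -5.8224, 0.9339),
]

for chosen, rejected, margin in eval_rows:
    # margin = chosen reward - rejected reward, up to table rounding
    assert abs((chosen - rejected) - margin) < 1e-3
```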
+ ### Framework versions
+
+ - PEFT 0.7.1
+ - Transformers 4.39.0.dev0
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.14.6
+ - Tokenizers 0.15.2
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1d578e4e69ed96a831f72e6a642432a97f3d4b036874b499f746cc86d3bcc232
+ oid sha256:8929f13332d1c0d3d17af32b08282af151e831d5361900cb1c75c6d84d6d704a
  size 671150064
all_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.3073710792501208,
+     "train_runtime": 4217.5244,
+     "train_samples": 5678,
+     "train_samples_per_second": 1.346,
+     "train_steps_per_second": 0.084
+ }
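The throughput figures in this file are internally consistent with the runtime and the 355 optimizer steps recorded in trainer_state.json; a quick check:

```python
# Values reported in all_results.json / trainer_state.json.
train_runtime = 4217.5244   # seconds
train_samples = 5678
global_step = 355

samples_per_second = round(train_samples / train_runtime, 3)
steps_per_second = round(global_step / train_runtime, 3)
print(samples_per_second, steps_per_second)  # 1.346 0.084
```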
runs/Jul29_11-02-20_notebook-deployment-48-7d9b6c99-khd85/events.out.tfevents.1722251041.notebook-deployment-48-7d9b6c99-khd85.3446415.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:72365fe80b2e7b069719c942c50d19621d93adc008d0dd97a7753e2f761f936f
- size 35227
+ oid sha256:dbe9bc7605d1642b494dffc8cf3ff6bdc3dac1e5e352d0c497a23ba0c4ac1d5c
+ size 39981
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 1.0,
+     "train_loss": 0.3073710792501208,
+     "train_runtime": 4217.5244,
+     "train_samples": 5678,
+     "train_samples_per_second": 1.346,
+     "train_steps_per_second": 0.084
+ }
trainer_state.json ADDED
@@ -0,0 +1,735 @@
+ {
+   "best_metric": null,
+   "best_model_checkpoint": null,
+   "epoch": 1.0,
+   "eval_steps": 100,
+   "global_step": 355,
+   "is_hyper_param_search": false,
+   "is_local_process_zero": true,
+   "is_world_process_zero": true,
+   "log_history": [
+     {
+       "epoch": 0.0,
+       "grad_norm": 1.6023182644205256,
+       "learning_rate": 1.3888888888888888e-07,
+       "logits/chosen": -2.861618995666504,
+       "logits/rejected": -2.8205904960632324,
+       "logps/chosen": -271.06011962890625,
+       "logps/rejected": -211.1704559326172,
+       "loss": 0.6931,
+       "rewards/accuracies": 0.0,
+       "rewards/chosen": 0.0,
+       "rewards/margins": 0.0,
+       "rewards/margins_max": 0.0,
+       "rewards/margins_min": 0.0,
+       "rewards/margins_std": 0.0,
+       "rewards/rejected": 0.0,
+       "step": 1
+     },
+     {
+       "epoch": 0.03,
+       "grad_norm": 9.801703666985174,
+       "learning_rate": 1.3888888888888892e-06,
+       "logits/chosen": -2.833193778991699,
+       "logits/rejected": -2.7905588150024414,
+       "logps/chosen": -324.9515380859375,
+       "logps/rejected": -274.97601318359375,
+       "loss": 0.6927,
+       "rewards/accuracies": 0.5,
+       "rewards/chosen": 0.0009881681762635708,
+       "rewards/margins": 0.001091944519430399,
+       "rewards/margins_max": 0.005423115566372871,
+       "rewards/margins_min": -0.0020756451413035393,
+       "rewards/margins_std": 0.0033777032513171434,
+       "rewards/rejected": -0.00010377627768320963,
+       "step": 10
+     },
+     {
+       "epoch": 0.06,
+       "grad_norm": 1.8295784413634215,
+       "learning_rate": 2.7777777777777783e-06,
+       "logits/chosen": -2.7249505519866943,
+       "logits/rejected": -2.7066354751586914,
+       "logps/chosen": -293.10626220703125,
+       "logps/rejected": -215.7418670654297,
+       "loss": 0.6894,
+       "rewards/accuracies": 0.800000011920929,
+       "rewards/chosen": 0.007196377031505108,
+       "rewards/margins": 0.0068252841010689735,
+       "rewards/margins_max": 0.015210196375846863,
+       "rewards/margins_min": -0.0003913335967808962,
+       "rewards/margins_std": 0.006930469069629908,
+       "rewards/rejected": 0.0003710930759552866,
+       "step": 20
+     },
+     {
+       "epoch": 0.08,
+       "grad_norm": 2.0991251258429813,
+       "learning_rate": 4.166666666666667e-06,
+       "logits/chosen": -2.816267490386963,
+       "logits/rejected": -2.7468106746673584,
+       "logps/chosen": -300.4118347167969,
+       "logps/rejected": -232.51480102539062,
+       "loss": 0.6751,
+       "rewards/accuracies": 0.887499988079071,
+       "rewards/chosen": 0.03294036164879799,
+       "rewards/margins": 0.03294597938656807,
+       "rewards/margins_max": 0.06541319936513901,
+       "rewards/margins_min": 0.006184346973896027,
+       "rewards/margins_std": 0.02764921821653843,
+       "rewards/rejected": -5.6155026868509594e-06,
+       "step": 30
+     },
+     {
+       "epoch": 0.11,
+       "grad_norm": 1.7320040744134495,
+       "learning_rate": 4.998060489154965e-06,
+       "logits/chosen": -2.8301820755004883,
+       "logits/rejected": -2.751152753829956,
+       "logps/chosen": -271.0854187011719,
+       "logps/rejected": -225.5413360595703,
+       "loss": 0.6584,
+       "rewards/accuracies": 0.875,
+       "rewards/chosen": 0.06399233639240265,
+       "rewards/margins": 0.0620587058365345,
+       "rewards/margins_max": 0.128224715590477,
+       "rewards/margins_min": 0.005723648704588413,
+       "rewards/margins_std": 0.05614202097058296,
+       "rewards/rejected": 0.0019336309051141143,
+       "step": 40
+     },
+     {
+       "epoch": 0.14,
+       "grad_norm": 1.9286136496428266,
+       "learning_rate": 4.976275538042932e-06,
+       "logits/chosen": -2.7784149646759033,
+       "logits/rejected": -2.7074830532073975,
+       "logps/chosen": -263.76910400390625,
+       "logps/rejected": -236.1763458251953,
+       "loss": 0.6279,
+       "rewards/accuracies": 0.9375,
+       "rewards/chosen": 0.1180100291967392,
+       "rewards/margins": 0.14100034534931183,
+       "rewards/margins_max": 0.2934209704399109,
+       "rewards/margins_min": 0.029913101345300674,
+       "rewards/margins_std": 0.12193763256072998,
+       "rewards/rejected": -0.022990306839346886,
+       "step": 50
+     },
+     {
+       "epoch": 0.17,
+       "grad_norm": 2.19542131799087,
+       "learning_rate": 4.93049306999712e-06,
+       "logits/chosen": -2.6885108947753906,
+       "logits/rejected": -2.653852939605713,
+       "logps/chosen": -303.560791015625,
+       "logps/rejected": -276.5751647949219,
+       "loss": 0.5911,
+       "rewards/accuracies": 0.9624999761581421,
+       "rewards/chosen": 0.10078845918178558,
+       "rewards/margins": 0.23882508277893066,
+       "rewards/margins_max": 0.41835322976112366,
+       "rewards/margins_min": 0.06585223972797394,
+       "rewards/margins_std": 0.1579587757587433,
+       "rewards/rejected": -0.1380366086959839,
+       "step": 60
+     },
+     {
+       "epoch": 0.2,
+       "grad_norm": 1.8726629315170749,
+       "learning_rate": 4.861156761634014e-06,
+       "logits/chosen": -2.6910977363586426,
+       "logits/rejected": -2.637085437774658,
+       "logps/chosen": -308.81890869140625,
+       "logps/rejected": -251.6500701904297,
+       "loss": 0.5567,
+       "rewards/accuracies": 0.9375,
+       "rewards/chosen": 0.14042340219020844,
+       "rewards/margins": 0.3005082309246063,
+       "rewards/margins_max": 0.6100845336914062,
+       "rewards/margins_min": 0.049360163509845734,
+       "rewards/margins_std": 0.25483688712120056,
+       "rewards/rejected": -0.16008484363555908,
+       "step": 70
+     },
+     {
+       "epoch": 0.23,
+       "grad_norm": 3.2506013460624406,
+       "learning_rate": 4.7689385491773934e-06,
+       "logits/chosen": -2.6962907314300537,
+       "logits/rejected": -2.648644208908081,
+       "logps/chosen": -310.2759704589844,
+       "logps/rejected": -320.2915344238281,
+       "loss": 0.5126,
+       "rewards/accuracies": 0.949999988079071,
+       "rewards/chosen": 0.1099453717470169,
+       "rewards/margins": 0.4490734934806824,
+       "rewards/margins_max": 0.8111162185668945,
+       "rewards/margins_min": 0.09584008157253265,
+       "rewards/margins_std": 0.3334144353866577,
+       "rewards/rejected": -0.3391281068325043,
+       "step": 80
+     },
+     {
+       "epoch": 0.25,
+       "grad_norm": 3.7658324125158935,
+       "learning_rate": 4.654732116743193e-06,
+       "logits/chosen": -2.5777134895324707,
+       "logits/rejected": -2.555579423904419,
+       "logps/chosen": -265.8702697753906,
+       "logps/rejected": -246.2513427734375,
+       "loss": 0.456,
+       "rewards/accuracies": 0.987500011920929,
+       "rewards/chosen": 0.10124516487121582,
+       "rewards/margins": 0.5738865733146667,
+       "rewards/margins_max": 0.9613016843795776,
+       "rewards/margins_min": 0.21029099822044373,
+       "rewards/margins_std": 0.3518252670764923,
+       "rewards/rejected": -0.4726414680480957,
+       "step": 90
+     },
+     {
+       "epoch": 0.28,
+       "grad_norm": 7.900318438715333,
+       "learning_rate": 4.5196442356717526e-06,
+       "logits/chosen": -2.587555408477783,
+       "logits/rejected": -2.5667903423309326,
+       "logps/chosen": -279.7422790527344,
+       "logps/rejected": -321.7631530761719,
+       "loss": 0.4419,
+       "rewards/accuracies": 0.9125000238418579,
+       "rewards/chosen": 0.01669424958527088,
+       "rewards/margins": 0.6124511957168579,
+       "rewards/margins_max": 1.1129316091537476,
+       "rewards/margins_min": 0.16525402665138245,
+       "rewards/margins_std": 0.43622416257858276,
+       "rewards/rejected": -0.5957569479942322,
+       "step": 100
+     },
+     {
+       "epoch": 0.28,
+       "eval_logits/chosen": -2.5718588829040527,
+       "eval_logits/rejected": -2.544903039932251,
+       "eval_logps/chosen": -329.8811340332031,
+       "eval_logps/rejected": -318.3518371582031,
+       "eval_loss": 0.6649842262268066,
+       "eval_rewards/accuracies": 0.6240000128746033,
+       "eval_rewards/chosen": -0.4528772234916687,
+       "eval_rewards/margins": 0.14485196769237518,
+       "eval_rewards/margins_max": 0.8618865609169006,
+       "eval_rewards/margins_min": -0.5417762994766235,
+       "eval_rewards/margins_std": 0.46923625469207764,
+       "eval_rewards/rejected": -0.5977292656898499,
+       "eval_runtime": 428.7091,
+       "eval_samples_per_second": 4.665,
+       "eval_steps_per_second": 0.292,
+       "step": 100
+     },
+     {
+       "epoch": 0.31,
+       "grad_norm": 3.5447046302664655,
+       "learning_rate": 4.364984038837727e-06,
+       "logits/chosen": -2.6017634868621826,
+       "logits/rejected": -2.543693780899048,
+       "logps/chosen": -371.7081604003906,
+       "logps/rejected": -368.0250549316406,
+       "loss": 0.3877,
+       "rewards/accuracies": 0.9750000238418579,
+       "rewards/chosen": 0.028975462540984154,
+       "rewards/margins": 0.844887912273407,
+       "rewards/margins_max": 1.4032989740371704,
+       "rewards/margins_min": 0.22432711720466614,
+       "rewards/margins_std": 0.5291948914527893,
+       "rewards/rejected": -0.8159124255180359,
+       "step": 110
+     },
+     {
+       "epoch": 0.34,
+       "grad_norm": 2.8541926547535676,
+       "learning_rate": 4.192250333880045e-06,
+       "logits/chosen": -2.566699504852295,
+       "logits/rejected": -2.523707866668701,
+       "logps/chosen": -360.9166564941406,
+       "logps/rejected": -359.21478271484375,
+       "loss": 0.393,
+       "rewards/accuracies": 0.9375,
+       "rewards/chosen": -0.12530414760112762,
+       "rewards/margins": 0.8049014210700989,
+       "rewards/margins_max": 1.5037436485290527,
+       "rewards/margins_min": 0.22369906306266785,
+       "rewards/margins_std": 0.5746256709098816,
+       "rewards/rejected": -0.9302055239677429,
+       "step": 120
+     },
+     {
+       "epoch": 0.37,
+       "grad_norm": 2.676765116222706,
+       "learning_rate": 4.0031170782990214e-06,
+       "logits/chosen": -2.488229751586914,
+       "logits/rejected": -2.4302055835723877,
+       "logps/chosen": -412.65087890625,
+       "logps/rejected": -429.1094665527344,
+       "loss": 0.3581,
+       "rewards/accuracies": 0.9750000238418579,
+       "rewards/chosen": -0.3028665781021118,
+       "rewards/margins": 0.9839191436767578,
+       "rewards/margins_max": 1.651447057723999,
+       "rewards/margins_min": 0.29168885946273804,
+       "rewards/margins_std": 0.603247344493866,
+       "rewards/rejected": -1.2867857217788696,
+       "step": 130
+     },
+     {
+       "epoch": 0.39,
+       "grad_norm": 8.271231871514864,
+       "learning_rate": 3.7994171571810756e-06,
+       "logits/chosen": -2.3792262077331543,
+       "logits/rejected": -2.348208427429199,
+       "logps/chosen": -337.531005859375,
+       "logps/rejected": -398.3355407714844,
+       "loss": 0.3623,
+       "rewards/accuracies": 0.949999988079071,
+       "rewards/chosen": -0.19116966426372528,
+       "rewards/margins": 1.0936906337738037,
+       "rewards/margins_max": 1.9557828903198242,
+       "rewards/margins_min": 0.2191341668367386,
+       "rewards/margins_std": 0.7831128835678101,
+       "rewards/rejected": -1.284860372543335,
+       "step": 140
+     },
+     {
+       "epoch": 0.42,
+       "grad_norm": 6.267090889931009,
+       "learning_rate": 3.5831246207606597e-06,
+       "logits/chosen": -2.244743585586548,
+       "logits/rejected": -2.207181453704834,
+       "logps/chosen": -340.91412353515625,
+       "logps/rejected": -383.2621765136719,
+       "loss": 0.3039,
+       "rewards/accuracies": 0.949999988079071,
+       "rewards/chosen": -0.5222915410995483,
+       "rewards/margins": 1.1795024871826172,
+       "rewards/margins_max": 2.02280855178833,
+       "rewards/margins_min": 0.273987352848053,
+       "rewards/margins_std": 0.7898725271224976,
+       "rewards/rejected": -1.701793909072876,
+       "step": 150
+     },
+     {
+       "epoch": 0.45,
+       "grad_norm": 5.800242307139694,
+       "learning_rate": 3.3563355539546795e-06,
+       "logits/chosen": -2.081348419189453,
+       "logits/rejected": -2.017423629760742,
+       "logps/chosen": -342.2528991699219,
+       "logps/rejected": -432.951171875,
+       "loss": 0.2389,
+       "rewards/accuracies": 0.9624999761581421,
+       "rewards/chosen": -0.41706037521362305,
+       "rewards/margins": 1.5685182809829712,
+       "rewards/margins_max": 2.661461114883423,
+       "rewards/margins_min": 0.5422362685203552,
+       "rewards/margins_std": 0.971255898475647,
+       "rewards/rejected": -1.9855785369873047,
+       "step": 160
+     },
+     {
+       "epoch": 0.48,
+       "grad_norm": 10.002190159330027,
+       "learning_rate": 3.121247763262235e-06,
+       "logits/chosen": -2.031193971633911,
+       "logits/rejected": -1.9694950580596924,
+       "logps/chosen": -359.4656982421875,
+       "logps/rejected": -506.41217041015625,
+       "loss": 0.2274,
+       "rewards/accuracies": 0.9750000238418579,
+       "rewards/chosen": -0.2984406650066376,
+       "rewards/margins": 1.8030246496200562,
+       "rewards/margins_max": 2.6862552165985107,
+       "rewards/margins_min": 0.6847059726715088,
+       "rewards/margins_std": 0.9101365208625793,
+       "rewards/rejected": -2.1014654636383057,
+       "step": 170
+     },
+     {
+       "epoch": 0.51,
+       "grad_norm": 7.495552270047072,
+       "learning_rate": 2.8801394778833475e-06,
+       "logits/chosen": -1.804452657699585,
+       "logits/rejected": -1.6888850927352905,
+       "logps/chosen": -424.28533935546875,
+       "logps/rejected": -582.659423828125,
+       "loss": 0.2208,
+       "rewards/accuracies": 1.0,
+       "rewards/chosen": -0.923845112323761,
+       "rewards/margins": 1.9609006643295288,
+       "rewards/margins_max": 3.038311719894409,
+       "rewards/margins_min": 0.9176443219184875,
+       "rewards/margins_std": 0.9313791394233704,
+       "rewards/rejected": -2.8847455978393555,
+       "step": 180
+     },
+     {
+       "epoch": 0.54,
+       "grad_norm": 5.26489542539375,
+       "learning_rate": 2.6353472714635443e-06,
+       "logits/chosen": -1.6563403606414795,
+       "logits/rejected": -1.4980707168579102,
+       "logps/chosen": -365.2936096191406,
+       "logps/rejected": -520.5008544921875,
+       "loss": 0.1887,
+       "rewards/accuracies": 0.9750000238418579,
+       "rewards/chosen": -0.4333272874355316,
+       "rewards/margins": 2.3968050479888916,
+       "rewards/margins_max": 3.5410873889923096,
+       "rewards/margins_min": 1.2592780590057373,
+       "rewards/margins_std": 1.0371885299682617,
+       "rewards/rejected": -2.830132007598877,
+       "step": 190
+     },
+     {
+       "epoch": 0.56,
+       "grad_norm": 6.247380737778327,
+       "learning_rate": 2.3892434184240536e-06,
+       "logits/chosen": -1.6041126251220703,
+       "logits/rejected": -1.4733774662017822,
+       "logps/chosen": -478.11541748046875,
+       "logps/rejected": -663.8895874023438,
+       "loss": 0.1872,
+       "rewards/accuracies": 0.925000011920929,
+       "rewards/chosen": -1.3858261108398438,
+       "rewards/margins": 2.5970957279205322,
+       "rewards/margins_max": 4.095944404602051,
+       "rewards/margins_min": 0.5976935625076294,
+       "rewards/margins_std": 1.6118961572647095,
+       "rewards/rejected": -3.982921600341797,
+       "step": 200
+     },
+     {
+       "epoch": 0.56,
+       "eval_logits/chosen": -1.4771034717559814,
+       "eval_logits/rejected": -1.4158743619918823,
+       "eval_logps/chosen": -568.5177612304688,
+       "eval_logps/rejected": -612.582763671875,
+       "eval_loss": 0.7859476804733276,
+       "eval_rewards/accuracies": 0.6629999876022339,
+       "eval_rewards/chosen": -2.839242935180664,
+       "eval_rewards/margins": 0.70079505443573,
+       "eval_rewards/margins_max": 3.4374256134033203,
+       "eval_rewards/margins_min": -2.182706356048584,
+       "eval_rewards/margins_std": 1.9158276319503784,
+       "eval_rewards/rejected": -3.5400378704071045,
+       "eval_runtime": 428.5233,
+       "eval_samples_per_second": 4.667,
+       "eval_steps_per_second": 0.292,
+       "step": 200
+     },
+     {
+       "epoch": 0.59,
+       "grad_norm": 5.190377965258524,
+       "learning_rate": 2.1442129043167877e-06,
+       "logits/chosen": -1.4293570518493652,
+       "logits/rejected": -1.3583998680114746,
+       "logps/chosen": -448.363037109375,
+       "logps/rejected": -679.0516357421875,
+       "loss": 0.1699,
+       "rewards/accuracies": 0.9750000238418579,
+       "rewards/chosen": -1.281787395477295,
+       "rewards/margins": 2.944481134414673,
+       "rewards/margins_max": 4.665500640869141,
+       "rewards/margins_min": 0.9343290328979492,
+       "rewards/margins_std": 1.711601972579956,
+       "rewards/rejected": -4.226268291473389,
+       "step": 210
+     },
+     {
+       "epoch": 0.62,
+       "grad_norm": 4.464327030535654,
+       "learning_rate": 1.9026303129961049e-06,
+       "logits/chosen": -1.474830150604248,
+       "logits/rejected": -1.3183205127716064,
+       "logps/chosen": -591.7376708984375,
+       "logps/rejected": -873.0149536132812,
+       "loss": 0.1359,
+       "rewards/accuracies": 0.949999988079071,
+       "rewards/chosen": -2.382814407348633,
+       "rewards/margins": 3.604060411453247,
+       "rewards/margins_max": 4.968972206115723,
+       "rewards/margins_min": 1.4786490201950073,
+       "rewards/margins_std": 1.6280790567398071,
+       "rewards/rejected": -5.986874580383301,
+       "step": 220
+     },
+     {
+       "epoch": 0.65,
+       "grad_norm": 11.671302383042187,
+       "learning_rate": 1.66683681459314e-06,
+       "logits/chosen": -1.4693548679351807,
+       "logits/rejected": -1.3056604862213135,
+       "logps/chosen": -644.6790771484375,
+       "logps/rejected": -878.6692504882812,
+       "loss": 0.146,
+       "rewards/accuracies": 0.9375,
+       "rewards/chosen": -2.730034351348877,
+       "rewards/margins": 3.4357237815856934,
+       "rewards/margins_max": 5.224689483642578,
+       "rewards/margins_min": 1.111033320426941,
+       "rewards/margins_std": 1.8828909397125244,
+       "rewards/rejected": -6.165757656097412,
+       "step": 230
+     },
+     {
+       "epoch": 0.68,
+       "grad_norm": 5.647778822191043,
+       "learning_rate": 1.4391174773015836e-06,
+       "logits/chosen": -1.4095017910003662,
+       "logits/rejected": -1.2933498620986938,
+       "logps/chosen": -636.0205688476562,
+       "logps/rejected": -929.3019409179688,
+       "loss": 0.1482,
+       "rewards/accuracies": 0.949999988079071,
+       "rewards/chosen": -3.1122348308563232,
+       "rewards/margins": 3.3490359783172607,
+       "rewards/margins_max": 5.252045631408691,
+       "rewards/margins_min": 1.3964684009552002,
+       "rewards/margins_std": 1.7266426086425781,
+       "rewards/rejected": -6.461270809173584,
+       "step": 240
+     },
+     {
+       "epoch": 0.7,
+       "grad_norm": 29.012737765276803,
+       "learning_rate": 1.2216791228457778e-06,
+       "logits/chosen": -1.3994617462158203,
+       "logits/rejected": -1.2709558010101318,
+       "logps/chosen": -621.2999267578125,
+       "logps/rejected": -928.4699096679688,
+       "loss": 0.1235,
+       "rewards/accuracies": 0.9624999761581421,
+       "rewards/chosen": -3.1001884937286377,
+       "rewards/margins": 3.6740684509277344,
+       "rewards/margins_max": 5.534354209899902,
+       "rewards/margins_min": 1.508026361465454,
+       "rewards/margins_std": 1.7670818567276,
+       "rewards/rejected": -6.774256706237793,
+       "step": 250
+     },
+     {
+       "epoch": 0.73,
+       "grad_norm": 4.553216346552263,
+       "learning_rate": 1.0166289402331391e-06,
+       "logits/chosen": -1.4789546728134155,
+       "logits/rejected": -1.3316059112548828,
+       "logps/chosen": -634.4136962890625,
+       "logps/rejected": -945.2862548828125,
+       "loss": 0.1438,
+       "rewards/accuracies": 0.9375,
+       "rewards/chosen": -3.4232017993927,
+       "rewards/margins": 3.440350294113159,
+       "rewards/margins_max": 5.2788987159729,
+       "rewards/margins_min": 1.4436607360839844,
+       "rewards/margins_std": 1.7658751010894775,
+       "rewards/rejected": -6.863552093505859,
+       "step": 260
+     },
+     {
+       "epoch": 0.76,
+       "grad_norm": 6.13292881557691,
+       "learning_rate": 8.259540650444736e-07,
+       "logits/chosen": -1.3842976093292236,
+       "logits/rejected": -1.286069631576538,
+       "logps/chosen": -662.1298217773438,
+       "logps/rejected": -982.2066650390625,
+       "loss": 0.1621,
+       "rewards/accuracies": 0.9125000238418579,
+       "rewards/chosen": -3.530142307281494,
+       "rewards/margins": 3.6669273376464844,
+       "rewards/margins_max": 5.465144157409668,
+       "rewards/margins_min": 1.4210524559020996,
+       "rewards/margins_std": 1.8017559051513672,
+       "rewards/rejected": -7.197070121765137,
+       "step": 270
+     },
+     {
+       "epoch": 0.79,
+       "grad_norm": 8.782327714810695,
+       "learning_rate": 6.515023221586722e-07,
+       "logits/chosen": -1.3697433471679688,
+       "logits/rejected": -1.28987455368042,
+       "logps/chosen": -655.1025390625,
+       "logps/rejected": -1005.9417724609375,
+       "loss": 0.1048,
+       "rewards/accuracies": 0.987500011920929,
+       "rewards/chosen": -3.516455888748169,
+       "rewards/margins": 3.8169894218444824,
+       "rewards/margins_max": 5.726956367492676,
+       "rewards/margins_min": 1.5342118740081787,
+       "rewards/margins_std": 1.8832015991210938,
+       "rewards/rejected": -7.3334455490112305,
+       "step": 280
+     },
+     {
+       "epoch": 0.82,
+       "grad_norm": 16.94834231932571,
+       "learning_rate": 4.949643185335288e-07,
+       "logits/chosen": -1.4026738405227661,
+       "logits/rejected": -1.3095262050628662,
+       "logps/chosen": -593.7080078125,
+       "logps/rejected": -906.31298828125,
+       "loss": 0.1582,
+       "rewards/accuracies": 0.9375,
+       "rewards/chosen": -3.067917823791504,
+       "rewards/margins": 3.355029344558716,
+       "rewards/margins_max": 5.543763637542725,
+       "rewards/margins_min": 1.00192391872406,
+       "rewards/margins_std": 2.0951552391052246,
+       "rewards/rejected": -6.422947883605957,
+       "step": 290
+     },
+     {
+       "epoch": 0.85,
+       "grad_norm": 9.830855192004018,
+       "learning_rate": 3.578570595810274e-07,
+       "logits/chosen": -1.5070269107818604,
+       "logits/rejected": -1.3751003742218018,
+       "logps/chosen": -646.901123046875,
+       "logps/rejected": -984.0985107421875,
+       "loss": 0.1102,
+       "rewards/accuracies": 0.9624999761581421,
+       "rewards/chosen": -3.0038809776306152,
+       "rewards/margins": 3.9427695274353027,
+       "rewards/margins_max": 5.859181880950928,
+       "rewards/margins_min": 1.6614525318145752,
+       "rewards/margins_std": 1.9274225234985352,
+       "rewards/rejected": -6.946650505065918,
+       "step": 300
+     },
+     {
+       "epoch": 0.85,
+       "eval_logits/chosen": -1.4022948741912842,
+       "eval_logits/rejected": -1.3416800498962402,
+       "eval_logps/chosen": -773.4503173828125,
+       "eval_logps/rejected": -840.8230590820312,
+       "eval_loss": 0.8934583067893982,
+       "eval_rewards/accuracies": 0.6470000147819519,
+       "eval_rewards/chosen": -4.888567924499512,
+       "eval_rewards/margins": 0.9338728189468384,
+       "eval_rewards/margins_max": 4.508862495422363,
+       "eval_rewards/margins_min": -2.803253412246704,
+       "eval_rewards/margins_std": 2.4985644817352295,
+       "eval_rewards/rejected": -5.822441577911377,
+       "eval_runtime": 428.3228,
+       "eval_samples_per_second": 4.669,
+       "eval_steps_per_second": 0.292,
+       "step": 300
+     },
+     {
+       "epoch": 0.87,
+       "grad_norm": 25.66825260695654,
+       "learning_rate": 2.4150924791035037e-07,
+       "logits/chosen": -1.4382137060165405,
+       "logits/rejected": -1.2731282711029053,
+       "logps/chosen": -688.03857421875,
+       "logps/rejected": -948.1687622070312,
+       "loss": 0.1452,
+       "rewards/accuracies": 0.9624999761581421,
+       "rewards/chosen": -3.9022278785705566,
+       "rewards/margins": 3.437220335006714,
+       "rewards/margins_max": 5.5707316398620605,
+       "rewards/margins_min": 1.006291389465332,
+       "rewards/margins_std": 2.0285696983337402,
+       "rewards/rejected": -7.339447975158691,
+       "step": 310
+     },
+     {
+       "epoch": 0.9,
+       "grad_norm": 24.605253871204138,
+       "learning_rate": 1.4704840690808658e-07,
+       "logits/chosen": -1.4029080867767334,
+       "logits/rejected": -1.3075344562530518,
+       "logps/chosen": -682.196044921875,
+       "logps/rejected": -989.9052734375,
+       "loss": 0.1673,
+       "rewards/accuracies": 0.9125000238418579,
+       "rewards/chosen": -3.739229202270508,
+       "rewards/margins": 3.553121566772461,
+       "rewards/margins_max": 5.583710670471191,
+       "rewards/margins_min": 0.8181284070014954,
+       "rewards/margins_std": 2.200822114944458,
+       "rewards/rejected": -7.292350769042969,
+       "step": 320
+     },
+     {
+       "epoch": 0.93,
+       "grad_norm": 9.663524467768484,
+       "learning_rate": 7.538995394063996e-08,
+       "logits/chosen": -1.5248098373413086,
+       "logits/rejected": -1.3775807619094849,
+       "logps/chosen": -683.1748046875,
+       "logps/rejected": -983.3424682617188,
+       "loss": 0.1405,
+       "rewards/accuracies": 0.9624999761581421,
+       "rewards/chosen": -3.2885494232177734,
+       "rewards/margins": 3.875479221343994,
+       "rewards/margins_max": 5.90716552734375,
+       "rewards/margins_min": 1.6211687326431274,
+       "rewards/margins_std": 1.9896876811981201,
+       "rewards/rejected": -7.164028167724609,
+       "step": 330
+     },
+     {
+       "epoch": 0.96,
+       "grad_norm": 3.7848904322458856,
+       "learning_rate": 2.722832907015971e-08,
+       "logits/chosen": -1.389829397201538,
+       "logits/rejected": -1.2816789150238037,
+       "logps/chosen": -664.4754638671875,
+       "logps/rejected": -1003.7318115234375,
+       "loss": 0.115,
+       "rewards/accuracies": 0.987500011920929,
+       "rewards/chosen": -3.6462998390197754,
+       "rewards/margins": 3.879322052001953,
+       "rewards/margins_max": 5.659389019012451,
+       "rewards/margins_min": 1.880358338356018,
+       "rewards/margins_std": 1.7462043762207031,
+       "rewards/rejected": -7.5256218910217285,
+       "step": 340
+     },
+     {
+       "epoch": 0.99,
+       "grad_norm": 5.242067065610791,
+       "learning_rate": 3.030265255329623e-09,
+       "logits/chosen": -1.377044439315796,
+       "logits/rejected": -1.2859034538269043,
+       "logps/chosen": -652.96337890625,
+       "logps/rejected": -1036.343505859375,
+       "loss": 0.1061,
+       "rewards/accuracies": 0.9750000238418579,
+       "rewards/chosen": -3.380042552947998,
+       "rewards/margins": 4.118873119354248,
+       "rewards/margins_max": 5.629302501678467,
+       "rewards/margins_min": 2.122800827026367,
+       "rewards/margins_std": 1.6110029220581055,
+       "rewards/rejected": -7.4989166259765625,
+       "step": 350
+     },
+     {
+       "epoch": 1.0,
+       "step": 355,
+       "total_flos": 0.0,
+       "train_loss": 0.3073710792501208,
+       "train_runtime": 4217.5244,
+       "train_samples_per_second": 1.346,
+       "train_steps_per_second": 0.084
+     }
+   ],
+   "logging_steps": 10,
+   "max_steps": 355,
+   "num_input_tokens_seen": 0,
+   "num_train_epochs": 1,
+   "save_steps": 100,
+   "total_flos": 0.0,
+   "train_batch_size": 4,
+   "trial_name": null,
+   "trial_params": null
+ }