alexander-hm committed
Commit daae4c6
1 Parent(s): ab24e2a

End of training

Files changed (7)
  1. README.md +112 -0
  2. all_results.json +12 -0
  3. completed +0 -0
  4. eval_results.json +7 -0
  5. metrics.json +1 -0
  6. train_results.json +8 -0
  7. trainer_state.json +0 -0
README.md ADDED
@@ -0,0 +1,112 @@
+ ---
+ base_model: huggyllama/llama-13b
+ library_name: peft
+ license: other
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: llama-13b_alpaca-clean_l0.0002_64
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama-13b_alpaca-clean_l0.0002_64
+
+ This model is a fine-tuned version of [huggyllama/llama-13b](https://huggingface.co/huggyllama/llama-13b) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5226
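+
+ Since this repository ships a PEFT adapter (per `library_name: peft`) on top of `huggyllama/llama-13b`, loading it for inference might look like the minimal sketch below. This is not code from the training run: the adapter repo id `alexander-hm/llama-13b_alpaca-clean_l0.0002_64` and the prompt are assumptions.
+
+ ```python
+ from peft import PeftModel
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the frozen base model, then attach the fine-tuned adapter weights.
+ base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b", device_map="auto")
+ tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-13b")
+ # Adapter repo id inferred from the model name above; an assumption, not confirmed by the card.
+ model = PeftModel.from_pretrained(base, "alexander-hm/llama-13b_alpaca-clean_l0.0002_64")
+
+ inputs = tokenizer("Write a short note about LoRA fine-tuning.", return_tensors="pt").to(model.device)
+ output = model.generate(**inputs, max_new_tokens=64)
+ print(tokenizer.decode(output[0], skip_special_tokens=True))
+ ```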
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a hedged sketch of the equivalent `TrainingArguments` follows this list):
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 0
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 16 (train_batch_size × gradient_accumulation_steps)
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - lr_scheduler_warmup_ratio: 0.03
+ - training_steps: 0
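+
+ For orientation, here is a minimal sketch of the equivalent `TrainingArguments`, reconstructed from the list above; it is an illustration, not the original training script, and `output_dir` is a placeholder.
+
+ ```python
+ from transformers import TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="llama-13b_alpaca-clean_l0.0002_64",  # placeholder, not the actual path
+     learning_rate=2e-4,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=1,
+     seed=0,
+     gradient_accumulation_steps=16,  # effective batch size: 1 x 16 = 16
+     lr_scheduler_type="constant",
+     warmup_ratio=0.03,
+     num_train_epochs=3,  # the logged run below ends at epoch 3.0
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+ )
+ ```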
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:------:|:----:|:---------------:|
+ | 1.1456 | 0.0003 | 1 | 2.3431 |
+ | 2.0148 | 0.0590 | 187 | 1.5488 |
+ | 1.2875 | 0.1179 | 374 | 1.5405 |
+ | 1.0668 | 0.1769 | 561 | 1.5543 |
+ | 1.8093 | 0.2359 | 748 | 1.4975 |
+ | 1.6438 | 0.2949 | 935 | 1.4835 |
+ | 1.2251 | 0.3538 | 1122 | 1.4810 |
+ | 1.0625 | 0.4128 | 1309 | 1.4741 |
+ | 1.8002 | 0.4718 | 1496 | 1.4496 |
+ | 1.5269 | 0.5307 | 1683 | 1.4526 |
+ | 1.1458 | 0.5897 | 1870 | 1.4545 |
+ | 1.0543 | 0.6487 | 2057 | 1.4612 |
+ | 2.2424 | 0.7077 | 2244 | 1.4442 |
+ | 1.593 | 0.7666 | 2431 | 1.4435 |
+ | 1.0416 | 0.8256 | 2618 | 1.4539 |
+ | 0.9933 | 0.8846 | 2805 | 1.4524 |
+ | 1.9771 | 0.9436 | 2992 | 1.4390 |
+ | 0.9435 | 1.0025 | 3179 | 1.4399 |
+ | 2.3091 | 1.0615 | 3366 | 1.4685 |
+ | 1.3242 | 1.1205 | 3553 | 1.4607 |
+ | 1.1381 | 1.1794 | 3740 | 1.4711 |
+ | 0.907 | 1.2384 | 3927 | 1.4860 |
+ | 1.752 | 1.2974 | 4114 | 1.4583 |
+ | 1.0621 | 1.3564 | 4301 | 1.4590 |
+ | 0.9417 | 1.4153 | 4488 | 1.4633 |
+ | 1.0226 | 1.4743 | 4675 | 1.4648 |
+ | 1.8375 | 1.5333 | 4862 | 1.4569 |
+ | 1.3047 | 1.5922 | 5049 | 1.4614 |
+ | 0.9083 | 1.6512 | 5236 | 1.4736 |
+ | 0.9209 | 1.7102 | 5423 | 1.4640 |
+ | 1.6807 | 1.7692 | 5610 | 1.4494 |
+ | 1.0549 | 1.8281 | 5797 | 1.4558 |
+ | 0.9171 | 1.8871 | 5984 | 1.4559 |
+ | 2.0487 | 1.9461 | 6171 | 1.4512 |
+ | 0.8636 | 2.0050 | 6358 | 1.4486 |
+ | 0.8722 | 2.0640 | 6545 | 1.5880 |
+ | 1.2758 | 2.1230 | 6732 | 1.5332 |
+ | 0.9294 | 2.1820 | 6919 | 1.5220 |
+ | 0.9638 | 2.2409 | 7106 | 1.5444 |
+ | 0.9522 | 2.2999 | 7293 | 1.5982 |
+ | 1.0788 | 2.3589 | 7480 | 1.5257 |
+ | 1.0903 | 2.4178 | 7667 | 1.5385 |
+ | 0.9291 | 2.4768 | 7854 | 1.5559 |
+ | 1.0212 | 2.5358 | 8041 | 1.5356 |
+ | 1.3065 | 2.5948 | 8228 | 1.5146 |
+ | 0.9102 | 2.6537 | 8415 | 1.5322 |
+ | 0.8117 | 2.7127 | 8602 | 1.5404 |
+ | 1.4213 | 2.7717 | 8789 | 1.5409 |
+ | 1.1398 | 2.8307 | 8976 | 1.5152 |
+ | 0.9868 | 2.8896 | 9163 | 1.5408 |
+ | 0.8449 | 2.9486 | 9350 | 1.5555 |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.1.dev0
+ - Transformers 4.45.0.dev0
+ - PyTorch 2.3.0+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
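+
+ A quick way to confirm that a local environment matches the versions listed above (a simple sanity check, assuming the packages are installed):
+
+ ```python
+ import datasets, peft, tokenizers, torch, transformers
+
+ # Print installed versions to compare against the list above.
+ for mod in (peft, transformers, torch, datasets, tokenizers):
+     print(mod.__name__, mod.__version__)
+ ```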
all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+     "epoch": 3.0,
+     "eval_loss": 1.5226281881332397,
+     "eval_runtime": 266.3195,
+     "eval_samples_per_second": 3.755,
+     "eval_steps_per_second": 3.755,
+     "total_flos": 2.1746709092395008e+18,
+     "train_loss": 1.2714244133601815,
+     "train_runtime": 406836.4039,
+     "train_samples_per_second": 0.374,
+     "train_steps_per_second": 0.023
+ }
completed ADDED
File without changes
eval_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "epoch": 3.0,
+     "eval_loss": 1.5226281881332397,
+     "eval_runtime": 266.3195,
+     "eval_samples_per_second": 3.755,
+     "eval_steps_per_second": 3.755
+ }
metrics.json ADDED
@@ -0,0 +1 @@
+ {"run_name": "huggyllama/llama-13b_alpaca-clean_l0.0002_64", "train_runtime": 406836.4039, "train_samples_per_second": 0.374, "train_steps_per_second": 0.023, "total_flos": 2.1746709092395008e+18, "train_loss": 1.2714244133601815, "epoch": 3.0, "eval_loss": 1.5226281881332397, "eval_runtime": 266.3195, "eval_samples_per_second": 3.755, "eval_steps_per_second": 3.755}
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 3.0,
+     "total_flos": 2.1746709092395008e+18,
+     "train_loss": 1.2714244133601815,
+     "train_runtime": 406836.4039,
+     "train_samples_per_second": 0.374,
+     "train_steps_per_second": 0.023
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff