pakphum committed · verified
Commit ad1cce6 · 1 Parent(s): e841950

Model save

Files changed (1):
1. README.md (+112, -0)
README.md ADDED
@@ -0,0 +1,112 @@
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B-Instruct
tags:
- llama-factory
- generated_from_trainer
model-index:
- name: qlora-llama3b-iterative
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# qlora-llama3b-iterative

This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0051
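
Because this repository contains a PEFT (LoRA) adapter rather than full model weights, the adapter has to be loaded on top of the base model. Below is a minimal inference sketch: the adapter id `pakphum/qlora-llama3b-iterative` is an assumption based on this card's name, and the 4-bit load mirrors the QLoRA training setup (it requires `bitsandbytes`; a plain bf16 load also works).

```python
# Minimal inference sketch. The adapter repo id is assumed from this
# card's name; adjust it to the actual repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-3B-Instruct"
adapter_id = "pakphum/qlora-llama3b-iterative"  # assumed repo id

# 4-bit quantized load mirrors the QLoRA setup; optional for inference.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
model.eval()

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```
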
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 500

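For orientation, the settings above can be expressed as `transformers.TrainingArguments`. This is a sketch only: the run was launched through LLaMA-Factory, whose YAML config format differs, and `output_dir` is a placeholder.

```python
# Sketch: the reported hyperparameters mapped onto TrainingArguments.
# Not the actual LLaMA-Factory config; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qlora-llama3b-iterative",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=8,  # total train batch size: 1 * 8 = 8
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=500,
)
```
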
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.1156 | 0.0889 | 10 | 1.5894 |
| 1.1893 | 0.1778 | 20 | 0.6868 |
| 0.5218 | 0.2667 | 30 | 0.4555 |
| 0.5292 | 0.3556 | 40 | 0.3795 |
| 0.3866 | 0.4444 | 50 | 0.3065 |
| 0.3232 | 0.5333 | 60 | 0.2074 |
| 0.1802 | 0.6222 | 70 | 0.1532 |
| 0.21 | 0.7111 | 80 | 0.1348 |
| 0.158 | 0.8 | 90 | 0.1372 |
| 0.1629 | 0.8889 | 100 | 0.1276 |
| 0.0966 | 0.9778 | 110 | 0.1003 |
| 0.0643 | 1.0667 | 120 | 0.0879 |
| 0.0726 | 1.1556 | 130 | 0.0872 |
| 0.0493 | 1.2444 | 140 | 0.0906 |
| 0.0746 | 1.3333 | 150 | 0.0587 |
| 0.0473 | 1.4222 | 160 | 0.0561 |
| 0.0644 | 1.5111 | 170 | 0.0503 |
| 0.0366 | 1.6 | 180 | 0.0307 |
| 0.0247 | 1.6889 | 190 | 0.0233 |
| 0.01 | 1.7778 | 200 | 0.0215 |
| 0.0393 | 1.8667 | 210 | 0.0122 |
| 0.0299 | 1.9556 | 220 | 0.0180 |
| 0.0166 | 2.0444 | 230 | 0.0082 |
| 0.0319 | 2.1333 | 240 | 0.0083 |
| 0.0077 | 2.2222 | 250 | 0.0072 |
| 0.0141 | 2.3111 | 260 | 0.0031 |
| 0.0017 | 2.4 | 270 | 0.0120 |
| 0.0015 | 2.4889 | 280 | 0.0153 |
| 0.0126 | 2.5778 | 290 | 0.0141 |
| 0.0043 | 2.6667 | 300 | 0.0022 |
| 0.0068 | 2.7556 | 310 | 0.0019 |
| 0.0018 | 2.8444 | 320 | 0.0022 |
| 0.0026 | 2.9333 | 330 | 0.0034 |
| 0.0017 | 3.0222 | 340 | 0.0076 |
| 0.0002 | 3.1111 | 350 | 0.0102 |
| 0.0004 | 3.2 | 360 | 0.0112 |
| 0.006 | 3.2889 | 370 | 0.0094 |
| 0.0003 | 3.3778 | 380 | 0.0075 |
| 0.0003 | 3.4667 | 390 | 0.0069 |
| 0.0002 | 3.5556 | 400 | 0.0067 |
| 0.0005 | 3.6444 | 410 | 0.0066 |
| 0.0003 | 3.7333 | 420 | 0.0072 |
| 0.0037 | 3.8222 | 430 | 0.0063 |
| 0.004 | 3.9111 | 440 | 0.0053 |
| 0.0003 | 4.0 | 450 | 0.0052 |
| 0.0002 | 4.0889 | 460 | 0.0051 |
| 0.0002 | 4.1778 | 470 | 0.0050 |
| 0.0006 | 4.2667 | 480 | 0.0049 |
| 0.0005 | 4.3556 | 490 | 0.0048 |
| 0.0002 | 4.4444 | 500 | 0.0051 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
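
Since exact versions can matter for reproducing adapter behavior, one quick way to compare a local environment against these pins (assuming the packages are installed under their standard PyPI names):

```python
# Print locally installed versions of the packages pinned above.
from importlib import metadata

for pkg in ("peft", "transformers", "torch", "datasets", "tokenizers"):
    try:
        print(f"{pkg}: {metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```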