Update README.md
README.md CHANGED
@@ -127,14 +127,14 @@ The final evaluation cross-entropy ended around 0.4 for this model.
 
 
 
-|                            | Loss   |
-|:---------------------------|:-------|
-| **LORA**                   | 0.4603 |
-| **LORA+**                  | 0.4011 |
-| **DORA**                   | 0.4182 |
-| **qLORA (for 70b model)**  | 0.3694 |
-| **qLORA (for 8b model)**   | 0.5471 |
-| **(LO)ReFT**               | 0.4824 |
+|                            | Loss on Llama 3.1 fine tuning | Notes |
+|:---------------------------|:------------------------------|:------|
+| **LORA**                   | 0.4603                        |       |
+| **LORA+**                  | 0.4011                        | The model uploaded here |
+| **DORA**                   | 0.4182                        |       |
+| **qLORA (for 70b model)**  | 0.3694                        | Best evaluation loss, but too large to optimize further within my budget |
+| **qLORA (for 8b model)**   | 0.5471                        |       |
+| **(LO)ReFT**               | 0.4824                        |       |
 
 
 #### Metrics
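The rows above refer to standard parameter-efficient fine-tuning variants. As a rough, hedged sketch of how they differ in practice, assuming the Hugging Face `transformers`/`peft` stack; the base model name, rank, and target modules below are illustrative assumptions, not the configuration behind the numbers in the table:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-3.1-8B"  # assumed base; the 70b row would swap this

# LoRA adapter; setting use_dora=True turns the same config into DoRA (peft >= 0.9).
adapter_cfg = LoraConfig(
    r=16,                                 # assumed rank
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # assumed target modules
    use_dora=False,
)

# qLORA rows: identical adapter, but the frozen base model is loaded in 4-bit NF4.
bnb_cfg = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_cfg)
model = get_peft_model(model, adapter_cfg)
model.print_trainable_parameters()

# LORA+ keeps the same adapter but trains the B matrices with a larger learning
# rate than the A matrices (e.g. via the `loraplus` optimizer utilities), and
# (LO)ReFT replaces weight adapters with learned interventions on hidden
# representations (see the `pyreft` library); neither changes the sketch above.
```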