Update README.md
README.md
CHANGED
@@ -129,7 +129,7 @@ The final evaluation cross-entropy ended around 0.4 for this model.
 
 |                           | Loss on Llama 3.1 fine-tuning | Notice |
 |:--------------------------|:------------------------------|:-------|
-| **LORA**                  | 0.4603 |
+| **LORA**                  | 0.4603                        |        |
 | **LORA+**                 | 0.4011                        | The model uploaded here |
 | **DORA**                  | 0.4182                        |        |
 | **qLORA (for 70b model)** | 0.3694                        | The model with the best evaluation; it was too big to optimize further within my budget |
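The README does not show how the four adapter variants were configured, so as a rough illustration (not the author's actual setup), here is how such a comparison could be wired up with Hugging Face PEFT: `use_dora=True` toggles DoRA, `create_loraplus_optimizer` applies the LoRA+ learning-rate ratio, and a 4-bit NF4 quantization config gives qLoRA. The base checkpoint, target modules, and all hyperparameters below are assumptions.

```python
# Hypothetical sketch of the four adapter variants from the table above.
# Hyperparameters, target modules, and the base checkpoint are assumptions,
# not the author's actual settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from peft.optimizers import create_loraplus_optimizer

BASE = "meta-llama/Llama-3.1-8B"  # assumed base model

def build(variant: str):
    quant = None
    if variant == "qlora":
        # qLoRA: load the base model (the 70b one in the README's case) in 4-bit NF4.
        quant = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16,
        )
    model = AutoModelForCausalLM.from_pretrained(BASE, quantization_config=quant)
    if variant == "qlora":
        model = prepare_model_for_kbit_training(model)

    cfg = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,           # assumed values
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        use_dora=(variant == "dora"),                     # DORA row of the table
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, cfg)

    if variant == "lora+":
        # LoRA+: train the adapter B matrices with a larger learning rate than A.
        optimizer = create_loraplus_optimizer(
            model=model, optimizer_cls=torch.optim.AdamW,
            lr=2e-4, loraplus_lr_ratio=16,                # assumed ratio
        )
    else:
        optimizer = torch.optim.AdamW(model.parameters(), lr=2e-4)
    return model, optimizer
```

With the adapter configuration otherwise held identical, the evaluation cross-entropies in the table become directly comparable across the four variants.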