Update README.md
README.md
@@ -125,7 +125,7 @@ Please see the graph below:
 
 The final evaluation cross-entropy ended around 0.4 for this model.
 
-The table below shows the best cross-entropy (across all params) for each of the techniques applied. Without the embedding nodes included, the results were usually worse by up to 0.1.
+The table below shows the best evaluation cross-entropy (across all params) for each of the techniques applied. Without the embedding nodes included, the results were usually worse by up to 0.1.
 
 
 | | Loss on Llama 3.1 fine tuning | Notes |
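The "embedding nodes" comparison refers to whether the model's embedding layers are trained alongside the adapter weights. Below is a minimal sketch of one way to do that, assuming Hugging Face PEFT with LoRA; the model id and module names (`embed_tokens`, `lm_head`) are Llama-style assumptions, not details taken from this commit.

```python
# Sketch: LoRA fine-tuning with the embedding (and output) layers kept
# fully trainable. Assumes transformers + peft; module names are
# Llama-style assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    # Attach LoRA adapters to the attention projections.
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Train the embedding and output layers in full alongside the
    # adapters; omitting this list corresponds to the "without
    # embedding nodes" configuration compared in the table.
    modules_to_save=["embed_tokens", "lm_head"],
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```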