nenad1002 committed
Commit
b88bac7
1 Parent(s): be929af

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -129,7 +129,7 @@ The final evaluation cross-entropy ended around 0.4 for this model.
 
 | | Loss on Llama 3.1 fine tuning | Notice
 |:------------------|:---------------------------|
-| **LORA** | 0.4603 |
+| **LORA** | 0.4603 | |
 | **LORA+** | 0.4011 | The model uploaded here |
 | **DORA**| 0.4182 | |
 | **qLORA (for 70b model)**| 0.3694 | The model with best evaluation, was too big to optimize it further with with my budget|
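For context on the table above: LoRA+ (the variant uploaded here, per the Notice column) differs from plain LoRA mainly in that the B adapter matrix is trained with a larger learning rate than the A matrix. A minimal pure-Python sketch of that update rule follows; the dimensions, learning rate, and 16x ratio are illustrative assumptions, not values taken from this commit or its training run.

```python
# Sketch of the LoRA vs. LoRA+ update rule (hypothetical values).
# LoRA learns a low-rank update delta_W = B @ A; LoRA+ simply scales the
# learning rate of B by a ratio relative to A. Plain LoRA is lr_ratio == 1.

def sgd_step(mat, grad, lr):
    """One plain SGD step on a nested-list matrix: mat <- mat - lr * grad."""
    return [[m - lr * g for m, g in zip(mr, gr)] for mr, gr in zip(mat, grad)]

def lora_plus_step(A, B, grad_A, grad_B, lr, lr_ratio=16.0):
    """LoRA+ update: A is stepped with lr, B with lr * lr_ratio."""
    A_new = sgd_step(A, grad_A, lr)
    B_new = sgd_step(B, grad_B, lr * lr_ratio)
    return A_new, B_new

# Toy rank-1 adapters for a 2x2 weight: B is 2x1, A is 1x2, zero-initialized.
A = [[0.0, 0.0]]
B = [[0.0], [0.0]]
grad_A = [[1.0, 1.0]]
grad_B = [[1.0], [1.0]]
A, B = lora_plus_step(A, B, grad_A, grad_B, lr=0.01)
# Each entry of A moved by -0.01, each entry of B by -0.16:
# the B matrix learns 16x faster, which is the whole LoRA+ trick.
```

In a real fine-tuning setup this ratio would be configured through the optimizer's parameter groups rather than hand-rolled as above.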