Update README.md
# Notes:
- For small datasets with narrow content on which the model already performs well in our domain, and where we don't want the model to forget that knowledge => just need to focus on o.
- Fine-tuned LoRA with rank = 1, alpha = 512, 1 epoch, linear (optim)
- DoRA
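A minimal sketch of what the rank = 1, alpha = 512 setting above amounts to. The shapes, base weight, and initialization here are illustrative assumptions; the `alpha / r` scaling convention follows the original LoRA formulation, where the merged weight is `W + (alpha / r) * B @ A`:

```python
import numpy as np

# Illustrative LoRA merge with the settings from the notes:
# rank r = 1, alpha = 512, so the scaling factor is alpha / r = 512.
d_out, d_in, r, alpha = 64, 64, 1, 512  # dimensions are assumptions

W = np.zeros((d_out, d_in))          # frozen base weight (placeholder)
A = np.random.randn(r, d_in) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-init

scaling = alpha / r                  # = 512.0 with these settings
W_adapted = W + scaling * (B @ A)    # effective weight after merging

print(W_adapted.shape)               # rank-1 update applied to a (64, 64) weight
```

With rank 1 the update `B @ A` is a rank-1 matrix, so the adapter adds only `d_out + d_in` trainable parameters per layer; the large alpha compensates for the tiny rank by scaling the update up.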

# Improvement