Added step 3500
README.md CHANGED
@@ -18,8 +18,8 @@ library_name: peft
 ---
 In case I can't upload the newest step here you can check out this site [models.minipasila.net](https://models.minipasila.net/).
 
-(Updated to
-So this is only the
+(Updated to 3500th step)
+So this is only the 3500th step (out of 3922), trained on Google Colab because I'm a little low on money, but at least that's free. While testing, the LoRA seems to perform fairly well. The only real issue with this base model is that it only has a 2048-token context size.
 
 The trained formatting should be ChatML, but it seemed to work better with Mistral's formatting for some reason (possibly just because I haven't merged the LoRA into the base model yet).
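For reference, this is roughly what the two prompt formats mentioned above look like; the system/user text here is placeholder content, not from the training data:

```python
# ChatML, the format the LoRA was trained on.
chatml_prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Mistral's instruct format, which seemed to work better in testing.
mistral_prompt = "<s>[INST] Hello! [/INST]"
```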
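If the formatting quirk really does come down to the unmerged adapter, merging the LoRA into the base model is straightforward with `peft`. A minimal sketch, using placeholder repo names ("base-model" and "lora-adapter" are assumptions; substitute the actual base model and this adapter):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder names, not the real repos.
base = AutoModelForCausalLM.from_pretrained("base-model")
model = PeftModel.from_pretrained(base, "lora-adapter")

# Fold the LoRA weights into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("merged-model")
```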