AuriAetherwiing committed
Commit: 2caa7a2
Parent(s): 1701067

Update README.md

Files changed (1): README.md (+2, -0)
README.md CHANGED
@@ -15,10 +15,12 @@ I was trying to diversify Gemma's prose, without completely destroying it's smarts.
 Should be usable both for RP and raw completion storywriting.
 I originally planned to use this in a merge, but I feel like this model is interesting enough to be released on it's own as well.
 
+Model was trained by Auri.
 **Training notes.**
 
 This model was trained for 2 epochs on 10k rows (~18.7M tokens), taken equally from Erebus-87k and r_shortstories_24k datasets. It was trained on 8xH100 SXM node for 30 minutes with rsLoRA.
 I got complete nonsense reported to my wandb during this run, and logging stopped altogether after step 13 for some reason. Seems to be directly related to Gemma, as my training setup worked flawlessly for Qwen.
+Thanks to Kearm for helping with setting up LF on that node and to Featherless for providing it for EVA-Qwen2.5 (and this model, unknowingly lol) training.
 
 **Format**
 
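
Side note for readers of the training notes in the diff above: the commit does not include the actual training configuration (the notes only mention rsLoRA, LF, and an 8xH100 node). As a rough, illustrative sketch of what a rank-stabilized LoRA setup looks like in Hugging Face PEFT (which exposes it via the `use_rslora` flag) — the base model, rank, alpha, and target modules below are placeholder assumptions, not values taken from this run:

```python
# Illustrative only: this commit ships no training config. The sketch assumes
# Hugging Face PEFT, which implements rank-stabilized LoRA (rsLoRA) behind the
# `use_rslora` flag. Base model, rank, alpha, and target modules are
# placeholders, not the values used for this model.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b")  # assumed base; not stated in the commit

lora_config = LoraConfig(
    r=64,                # placeholder rank
    lora_alpha=64,       # placeholder alpha
    use_rslora=True,     # rsLoRA: scale adapters by lora_alpha / sqrt(r) instead of lora_alpha / r
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # placeholder module list
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # sanity check: only the LoRA adapters should be trainable
```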