neph1 committed
Commit a609d9c (1 parent: 2b37b8f)

Update README.md

Files changed (1): README.md (+9 -0)
README.md CHANGED
@@ -4,6 +4,7 @@ datasets:
 - neph1/bellman-7b-finetune
 language:
 - sv
+ library_name: peft
 ---

 Qlora trained for 5 epochs on 6400 rows of q&a from around 1000 pages from wikipedia + around 100 of python questions and examples from
@@ -11,6 +12,14 @@ eph1/Alpaca-Lora-GPT4-Swedish-Refined (because I had spent so much time cleaning
 gathered examples and some generated using chat-gpt.
 Dataset otherwise generated using gpt-3.5-turbo.

+ Rank: 16
+
+ Alpha: 16
+
+ Dropout: 0.1
+
+ Context length: 1024
+
 I may run another 5 epochs on this. But it feels like it's 'aligned' pretty well. (Regular mistral insists Magdalena Andersson is prime minister, still.)

 Example (q8):
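
The hunks above add `library_name: peft` to the model card metadata and document the QLoRA hyperparameters (rank 16, alpha 16, dropout 0.1, context length 1024). As a rough illustration of how those numbers map onto a `peft` configuration, here is a minimal sketch; it is not the training script from this repo, and everything beyond the three documented values is an assumption.

```python
# Hypothetical sketch only: a peft LoraConfig mirroring the hyperparameters
# listed in the README (Rank: 16, Alpha: 16, Dropout: 0.1). The commit adds
# documentation, not training code, so the remaining choices are assumptions.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                   # Rank: 16
    lora_alpha=16,          # Alpha: 16
    lora_dropout=0.1,       # Dropout: 0.1
    task_type="CAUSAL_LM",  # assumed: causal-LM fine-tune of the 7B base model
    # target_modules is left at its default; the README does not say which
    # projection layers were adapted.
)

# "Context length: 1024" is a tokenization/training-time setting rather than a
# LoraConfig field; it would be applied when truncating the Q&A rows.
```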