PEFT
Safetensors
Transformers
English
Finnish
text-generation-inference
unsloth
llama
trl
mpasila committed
Commit 0b956fe
1 Parent(s): 5e6f9e4

Update README.md

Files changed (1)
  1. README.md +18 -2
README.md CHANGED
@@ -9,9 +9,25 @@ tags:
 license: apache-2.0
 language:
 - en
+- fi
+datasets:
+- mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix
+- LumiOpen/instruction-collection-fin
+- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
+library_name: peft
 ---
+(Updated to the 1000th step)
+So this is only the 1000th step (out of 3922), trained on Google Colab because I'm a little low on money, but at least that's free. While testing the LoRA it seems to perform fairly well. The only real issue with this base model is that it only has a 2048-token context size.
 
-# Uploaded model
+The trained formatting should be ChatML, but it seemed to work better with Mistral's formatting for some reason (could just be because I haven't merged the model yet).
+
+Dataset used was [a mix](https://huggingface.co/datasets/mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix) of these:
+
+[LumiOpen/instruction-collection-fin](https://huggingface.co/datasets/LumiOpen/instruction-collection-fin)
+
+[Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)
+
+# Uploaded Ahma-SlimInstruct-LoRA-V0.1-7B model
 
 - **Developed by:** mpasila
 - **License:** apache-2.0
@@ -19,4 +35,4 @@ language:
 
 This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
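For reference, the two prompt formats the commit message compares (ChatML, which the LoRA was trained on, versus Mistral's `[INST]` format, which reportedly worked better in testing) look roughly like this. This is a minimal sketch with generic template strings, not taken from this model's actual tokenizer config:

```python
def chatml_prompt(user_message: str, system: str = "You are a helpful assistant.") -> str:
    """Build a ChatML-style prompt (the format this LoRA was trained on)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


def mistral_prompt(user_message: str) -> str:
    """Build a Mistral-style [INST] prompt (reported to work better here)."""
    return f"<s>[INST] {user_message} [/INST]"


print(chatml_prompt("Kerro Suomesta."))
print(mistral_prompt("Kerro Suomesta."))
```

In practice, once the adapter is merged and a chat template is set in the tokenizer config, `tokenizer.apply_chat_template(...)` from `transformers` would handle this formatting automatically.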