---
base_model: Finnish-NLP/Ahma-7B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
- fi
datasets:
- mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix
- LumiOpen/instruction-collection-fin
- Gryphe/Sonnet3.5-SlimOrcaDedupCleaned
library_name: peft
---
(Updated to the 3500th step)

This checkpoint is at step 3500 out of 3922, trained on Google Colab's free tier because I'm a little low on money, but at least that's free. While testing the LoRA it seems to perform fairly well. The only real issue with the base model is that it only has a 2048-token context size.

The training format should be ChatML, but for some reason it seemed to work better with Mistral's formatting (possibly just because I hadn't merged the model yet).
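For reference, the two prompt formats mentioned above can be sketched as plain string templates. This is only an illustration of the generic ChatML and Mistral instruct layouts; the exact special tokens this model expects may differ from what its tokenizer's chat template produces, so treat these helpers as assumptions, not the model's canonical format.

```python
def chatml_prompt(user_message: str, system_message: str = "") -> str:
    """Build a generic ChatML-style prompt (the format the LoRA was trained on)."""
    prompt = ""
    if system_message:
        prompt += f"<|im_start|>system\n{system_message}<|im_end|>\n"
    prompt += f"<|im_start|>user\n{user_message}<|im_end|>\n"
    prompt += "<|im_start|>assistant\n"  # leave open for the model to complete
    return prompt


def mistral_prompt(user_message: str) -> str:
    """Build a generic Mistral-style instruct prompt."""
    return f"<s>[INST] {user_message} [/INST]"


print(chatml_prompt("Hei, mitä kuuluu?"))
print(mistral_prompt("Hei, mitä kuuluu?"))
```

In practice it's safer to use `tokenizer.apply_chat_template(...)` from Transformers so the prompt matches whatever template ships with the tokenizer.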

Dataset used was [a mix](https://huggingface.co/datasets/mpasila/LumiOpenInstruct-GrypheSlimOrca-Mix) of these:

[LumiOpen/instruction-collection-fin](https://huggingface.co/datasets/LumiOpen/instruction-collection-fin)

[Gryphe/Sonnet3.5-SlimOrcaDedupCleaned](https://huggingface.co/datasets/Gryphe/Sonnet3.5-SlimOrcaDedupCleaned)

Merged: [mpasila/Ahma-SlimInstruct-V0.1-7B](https://huggingface.co/mpasila/Ahma-SlimInstruct-V0.1-7B)

After I'm done training this I will probably try continued pre-training on Gemma 2 2B. I'm gonna add both Finnish and English data, along with some math data, and maybe some roleplaying data and some books as well.

# Uploaded Ahma-SlimInstruct-LoRA-V0.1-7B model

- **Developed by:** mpasila
- **License:** apache-2.0
- **Finetuned from model:** Finnish-NLP/Ahma-7B

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)