Commit a7708da by jonabur (parent: 4cd3a61): update README
README.md CHANGED
```diff
@@ -12,7 +12,10 @@ language:
 
 # Poro 34B Chat
 
-Poro 34b chat is a chat-tuned version of [Poro
+Poro 34B Chat is a chat-tuned version of [Poro
+34B](https://huggingface.co/LumiOpen/Poro-34B) trained to follow instructions
+in both Finnish and English. A quantized version is also available
+[here](https://huggingface.co/LumiOpen/Poro-34B-chat-GGUF).
 
 Because of the limited amount of instruction-tuning data available for Finnish, documents from the English datasets were machine-translated by the Poro 34B base model into Finnish, then used to train this chat version. We selected only datasets that are available for commercial use and that contain synthetic data only if it was gathered in a ToS-compliant fashion.
 
@@ -24,8 +27,15 @@ This project is part of an ongoing effort to create open source large language m
 
 
 ## Fine Tuning
 
-Zephyr--??? TODO
+Poro-34B-chat is an SFT finetune of Poro-34B on a collection of Finnish and
+English instruction datasets. The collection is made up of roughly 40%
+English, 40% Finnish, and 20% cross-lingual entries.
+
+We finetuned the base model for 3 epochs with a learning rate of 2e-05, a
+warmup ratio of 0.1, and a global batch size of 48. We used the [Alignment Handbook](https://github.com/huggingface/alignment-handbook/)
+code for finetuning. For full-parameter finetuning, we used 3 nodes (8 GPUs per
+node).
 
 ## Datasets
 
```
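The updated introduction points readers at the chat model and its GGUF quantization. As a minimal sketch of how the resulting model would typically be used, assuming the repo id `LumiOpen/Poro-34B-chat` and a standard transformers chat template (the prompt and sampling settings are illustrative, not from the commit):

```python
# Minimal inference sketch for the chat model described in the README.
# Assumes the repo id LumiOpen/Poro-34B-chat and a standard chat template;
# nothing here is taken from the commit itself.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LumiOpen/Poro-34B-chat"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 34B model needs several GPUs or CPU offload
)

# Finnish prompt: "Tell me three facts about Finland."
messages = [{"role": "user", "content": "Kerro kolme faktaa Suomesta."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```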
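The README also notes that the Finnish training data was produced by machine-translating English instruction datasets with the Poro 34B base model. A hedged sketch of what that step could look like; the prompt template and generation settings are assumptions, not the project's actual pipeline:

```python
# Hypothetical sketch of the translation step described in the README:
# English instruction data -> Finnish, using the Poro 34B base model.
# The prompt template below is an assumption, not the project's actual one.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="LumiOpen/Poro-34B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def translate_to_finnish(text: str) -> str:
    # "Käännä suomeksi" = "Translate into Finnish"; greedy decoding keeps
    # the output faithful to the source rather than creative.
    prompt = f"Käännä suomeksi: {text}\nSuomeksi:"
    out = generator(prompt, max_new_tokens=512, do_sample=False, return_full_text=False)
    return out[0]["generated_text"].strip()

print(translate_to_finnish("Explain what a neural network is."))
```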
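The hyperparameters added in the Fine Tuning section map directly onto an SFT recipe. A sketch using TRL's `SFTTrainer`, which the Alignment Handbook builds on; only the epoch count, learning rate, warmup ratio, global batch size, and node count come from the README, while the dataset file and per-device batch split are placeholders:

```python
# Sketch of the stated SFT recipe with TRL's SFTTrainer (the Alignment
# Handbook wraps this trainer). Only the hyperparameters are from the README;
# the dataset file is a placeholder.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Global batch size 48 over 3 nodes x 8 GPUs = 24 workers:
# 24 workers * 2 per device * 1 accumulation step = 48.
config = SFTConfig(
    output_dir="poro-34b-chat-sft",
    num_train_epochs=3,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=1,
    bf16=True,
)

# Placeholder for the roughly 40/40/20 English/Finnish/cross-lingual mixture.
dataset = load_dataset("json", data_files="sft_mixture.jsonl", split="train")

trainer = SFTTrainer(
    model="LumiOpen/Poro-34B",  # the base model named in the README
    args=config,
    train_dataset=dataset,
)
trainer.train()
```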