chrisociepa committed
Commit d67a4a9 · 1 parent: a110083
Update README.md

README.md CHANGED
@@ -34,7 +34,7 @@ Many consumer computers are equipped with good quality graphic cards that can be
 
 All the currently available language models have been trained mainly with English corpora with a little bit of other languages, including Polish. The effect is that these models are not the best at dealing with the Polish texts. Even the popular GPT models from OpenAI and Bard from Google often have issues with correct forms. Therefore we have decided to prepare a model based only on the Polish corpus. An additional advantage of using only the Polish corpus is the size of the model - it is better to focus on one language in the case of smaller models.
 
-It is important to remember that models are only as good as the data with which they are trained. Given the small size of the model, we trained it with carefully selected texts. This is why we have not used corpora such as Common Crawl that contain a lot of poor-quality data. With close collaboration and advice from the [Speakleash](https://speakleash.org) team, our team has prepared over
+It is important to remember that models are only as good as the data with which they are trained. Given the small size of the model, we trained it with carefully selected texts. This is why we have not used corpora such as Common Crawl that contain a lot of poor-quality data. With close collaboration and advice from the [Speakleash](https://speakleash.org) team, our team has prepared over 285GB of Polish language text corpus that has then been processed and used for training the model. Additionally, the unique feature of our model is that it has been trained on the largest amount of text among all available models for the Polish language.
 
 ## Model
 
@@ -113,7 +113,7 @@ A special tokenizer has been prepared and trained for the purpose of training th
 
 Collecting a large amount of high quality training data is a great challenge. Over the past years at Azurro, we have done a lot of projects connected with processing Big Data. Therefore, with our extensive experience, we have been able to prepare carefully selected training dataset quickly and efficiently.
 
-Our close collaboration with the Speakleash team has resulted in the creation of over
+Our close collaboration with the Speakleash team has resulted in the creation of over 285GB of the Polish language text corpus. The process of preparing the training dataset involved transforming documents by applying various cleaning and repairing rules, followed by selecting documents of appropriate quality.
 
 Our training dataset contains:
 
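The hunk above describes the data pipeline only in prose: transform documents with cleaning and repairing rules, then select documents of appropriate quality. As a rough illustration of that "clean, then select" shape, here is a minimal, hypothetical Python sketch; the cleaning rules, the `quality_score` heuristic, and the thresholds are all invented for this example and are not the actual Speakleash/Azurro tooling.

```python
import re

# Hypothetical cleaning/repair rules; the README does not publish the real ones.
def clean(text: str) -> str:
    text = text.replace("\u00ad", "")    # drop soft hyphens left over from PDFs
    text = re.sub(r"[ \t]+", " ", text)  # collapse runs of spaces and tabs
    return text.strip()

def quality_score(text: str) -> float:
    """Toy quality heuristic: share of alphabetic characters.
    A stand-in for whatever quality classifier was actually used."""
    return sum(ch.isalpha() for ch in text) / len(text) if text else 0.0

def prepare_corpus(documents, min_chars=200, min_score=0.7):
    """Transform each document with the cleaning rules, then keep
    only the documents of appropriate quality."""
    for doc in documents:
        doc = clean(doc)
        if len(doc) >= min_chars and quality_score(doc) >= min_score:
            yield doc

# Example: only the well-formed document survives filtering.
docs = ["Dobrej jakości polski tekst. " * 20, "@@@ ### 123 456 !!!"]
print(len(list(prepare_corpus(docs))))  # -> 1
```

A real corpus of this size would of course stream from disk and use far richer rules (deduplication, language identification, perplexity filters), but the two-stage shape of the pipeline matches the description in the hunk.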
@@ -178,7 +178,7 @@ Please cite this model using the following format:
 @online{AzurroAPT3Base1B,
     author = {Krzysztof Ociepa, Azurro},
     title = {Introducing APT3-1B-Base: Polish Language Model},
-    year = {
+    year = {2024},
     url = {www.azurro.pl/apt3-1b-base-en},
     note = {Accessed: 2024-01-04}, % change this date
     urldate = {2024-01-04} % change this date