---
license: cc-by-4.0
language:
- cs
- en
- pl
- sk
- sl
library_name: transformers
tags:
- translation
- mt
- marian
- pytorch
- sentence-piece
- many2many
- multilingual
- allegro
- laniqo
---

# MultiSlav MultiSlav-5lang

[//]: # (<p align="center">)

[//]: # ( <a href="https://ml.allegro.tech/"><img src="allegro-title.svg" alt="MLR @ Allegro.com"></a>)

[//]: # (</p>)

## Multilingual Many2Many MT Model

___MultiSlav-5lang___ is an encoder-decoder vanilla transformer model trained on the sentence-level machine translation task.
The model supports translation between 5 languages: Czech, English, Polish, Slovak, and Slovene.
This model is part of the [___MultiSlav___ collection](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683). More information will be available soon in our upcoming MultiSlav paper.

Experiments were conducted as part of a research project by the [Machine Learning Research](https://ml.allegro.tech/) lab for [Allegro.com](https://ml.allegro.tech/).
Big thanks to [laniqo.com](https://laniqo.com/) for cooperation on the research.

<p align="center">
<img src="multislav-5lang.svg">
</p>

___MultiSlav-5lang___ translates directly between all supported languages using a single Many2Many model, as shown in the diagram above.

### Model description

* **Model name:** multislav-5lang
* **Source Languages:** Czech, English, Polish, Slovak, Slovene
* **Target Languages:** Czech, English, Polish, Slovak, Slovene
* **Model Collection:** [MultiSlav](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683)
* **Model type:** MarianMTModel Encoder-Decoder
* **License:** CC BY 4.0 (commercial use allowed)
* **Developed by:** [MLR @ Allegro](https://ml.allegro.tech/) & [Laniqo.com](https://laniqo.com/)

### Supported languages

When using the model, you must specify the target language of the translation.
Target-language tokens are represented as 3-letter ISO 639-3 language codes embedded in the format `>>xxx<<`.
All accepted directions and their respective tokens are listed below.
Each of them was added as a special token to the SentencePiece tokenizer.

| **Target Language** | **First token** |
|---------------------|-----------------|
| Czech               | `>>ces<<`       |
| English             | `>>eng<<`       |
| Polish              | `>>pol<<`       |
| Slovak              | `>>slk<<`       |
| Slovene             | `>>slv<<`       |

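As a quick sanity check, the snippet below verifies that each target-language token is registered in the tokenizer (a minimal sketch, assuming the checkpoint is reachable under the name used in the quickstart below):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Allegro/MultiSlav-5lang")
for code in ["ces", "eng", "pol", "slk", "slv"]:
    token = f">>{code}<<"
    token_id = tokenizer.convert_tokens_to_ids(token)
    # A registered special token maps to its own id, not the <unk> id.
    print(token, token_id, token_id != tokenizer.unk_token_id)
```
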
## Use case quickstart

An example code snippet for using the model is shown below. Due to a bug, the `MarianMTModel` class must be used explicitly.

```python
from transformers import AutoTokenizer, MarianMTModel

model_name = "Allegro/MultiSlav-5lang"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

text = "Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki."
target_languages = ["ces", "eng", "slk", "slv"]
batch_to_translate = [
    f">>{lang}<<" + " " + text for lang in target_languages
]

translations = model.generate(**tokenizer.batch_encode_plus(batch_to_translate, return_tensors="pt"))
decoded_translations = tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True)

for trans in decoded_translations:
    print(trans)
```

Generated outputs:

Czech output:
> Allegro je on-line e-commerce platforma, na které své produkty prodávají střední a malé firmy, stejně jako velké značky.

English output:
> Allegro is an online e-commerce platform on which medium and small companies as well as large brands sell their products.

Slovak output:
> Allegro je internetová e-commerce platforma, na ktorej svoje produkty predávajú stredné a malé podniky, ako aj veľké značky.

Slovene output:
> Allegro je spletna platforma za e-poslovanje, na kateri srednje velika in mala podjetja ter velike blagovne znamke prodajajo svoje izdelke.

The model can also translate into Polish, following the same pattern:

```python
text = ">>pol<<" + " " + "Allegro is an online e-commerce platform on which medium and small companies as well as large brands sell their products."
translation = model.generate(**tokenizer.batch_encode_plus([text], return_tensors="pt"))
decoded_translation = tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True)

print(decoded_translation[0])
```

Generated Polish output:
> Allegro to internetowa platforma e-commerce, na której sprzedają swoje produkty średnie i małe firmy, a także duże marki.

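In the batched example above, all inputs share the same source sentence, so they tokenize to equal lengths. When batching sentences of different lengths, padding is needed; below is a minimal sketch reusing `model` and `tokenizer` from the quickstart (the example sentences are illustrative):

```python
# Padding aligns the input tensors when batched sentences differ in length.
sentences = [
    ">>eng<< Dzień dobry!",
    ">>ces<< Allegro to internetowa platforma e-commerce.",
]
inputs = tokenizer(sentences, return_tensors="pt", padding=True)
outputs = model.generate(**inputs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
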
## Training

The [SentencePiece](https://github.com/google/sentencepiece) tokenizer has a vocab size of 80k in total (16k per language) and was trained on a randomly sampled part of the training corpus.
For training we used the [MarianNMT](https://marian-nmt.github.io/) framework.
The base Marian configuration used was [transformer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113).
All training parameters are listed in the table below.

### Training hyperparameters

| **Hyperparameter**         | **Value**                                                                                                   |
|----------------------------|-------------------------------------------------------------------------------------------------------------|
| Total Parameter Size       | 258M                                                                                                        |
| Training Examples          | 578M                                                                                                        |
| Vocab Size                 | 80k                                                                                                         |
| Base Parameters            | [Marian transformer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113)  |
| Number of Encoding Layers  | 6                                                                                                           |
| Number of Decoding Layers  | 6                                                                                                           |
| Model Dimension            | 1024                                                                                                        |
| FF Dimension               | 4096                                                                                                        |
| Heads                      | 16                                                                                                          |
| Dropout                    | 0.1                                                                                                         |
| Batch Size                 | mini batch fit to VRAM                                                                                      |
| Training Accelerators      | 4x A100 40GB                                                                                                |
| Max Length                 | 100 tokens                                                                                                  |
| Optimizer                  | Adam                                                                                                        |
| Warmup steps               | 8000                                                                                                        |
| Context                    | Sentence-level MT                                                                                           |
| Source Languages Supported | Czech, English, Polish, Slovak, Slovene                                                                     |
| Target Languages Supported | Czech, English, Polish, Slovak, Slovene                                                                     |
| Precision                  | float16                                                                                                     |
| Validation Freq            | 3000 steps                                                                                                  |
| Stop Metric                | ChrF                                                                                                        |
| Stop Criterion             | 20 Validation steps                                                                                         |

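Several of the architecture values above are mirrored in the published checkpoint's Hugging Face config, so they can be cross-checked programmatically (a sketch; field names follow `MarianConfig`):

```python
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Allegro/MultiSlav-5lang")
print("encoder layers:", cfg.encoder_layers)            # expected 6
print("decoder layers:", cfg.decoder_layers)            # expected 6
print("model dimension:", cfg.d_model)                  # expected 1024
print("FF dimension:", cfg.encoder_ffn_dim)             # expected 4096
print("attention heads:", cfg.encoder_attention_heads)  # expected 16
print("vocab size:", cfg.vocab_size)                    # expected ~80k
```
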
## Training corpora

<p align="center">
<img src="./multi5-data.svg">
</p>

Our main research question was: "How does adding additional, related languages impact the quality of the model?" We explored it within the Slavic language family.
In this model we additionally added English <-> Slavic parallel corpora to further increase the open-source data regime.
We found that the additional data clearly improved performance compared to the bi-directional baseline models, and compared to the pivot models and MultiSlav-4slav in most of the directions.
For example, in translation from Polish to Czech this allowed us to expand the training data size from 63M to 578M examples, and from 18M to 578M examples for Slovak to Slovene.

We used only explicitly open-source data to ensure the open-source license of our model.
Datasets were downloaded via the [MT-Data](https://pypi.org/project/mtdata/0.2.10/) library. Total number of examples after filtering and deduplication: __578M__.

The datasets used, with data amounts prior to filtering and deduplication:

| **Corpus**           | **Data Size** |
|----------------------|--------------:|
| paracrawl            |     246407901 |
| opensubtitles        |     167583218 |
| multiparacrawl       |      52388826 |
| dgt                  |      36403859 |
| elrc                 |      29687222 |
| xlent                |      18375223 |
| wikititles           |      12936394 |
| wmt                  |      11074816 |
| wikimatrix           |      10435588 |
| dcep                 |      10239150 |
| ELRC                 |       7609067 |
| tildemodel           |       6309369 |
| europarl             |       6088362 |
| eesc                 |       5604672 |
| eubookshop           |       3732718 |
| emea                 |       3482661 |
| jrc_acquis           |       2920805 |
| ema                  |       1881408 |
| qed                  |       1835208 |
| elitr_eca            |       1398536 |
| EU-dcep              |       1132950 |
| rapid                |       1016905 |
| ecb                  |        885442 |
| kde4                 |        541944 |
| news_commentary      |        498432 |
| kde                  |        473269 |
| bible_uedin          |        429692 |
| europat              |        358911 |
| elra                 |        357696 |
| wikipedia            |        352118 |
| wikimedia            |        201088 |
| tatoeba              |         91251 |
| globalvoices         |         69736 |
| euconst              |         65507 |
| ubuntu               |         47301 |
| php                  |         44031 |
| ecdc                 |         21154 |
| eac                  |         20224 |
| eac_reference        |         10099 |
| gnome                |          4466 |
| EU-eac               |          2925 |
| books                |          2816 |
| EU-ecdc              |          2210 |
| newsdev              |          1953 |
| khresmoi_summary     |           889 |
| czechtourism         |           832 |
| khresmoi_summary_dev |           455 |
| worldbank            |           189 |

## Evaluation

Evaluation of the models was performed on the [Flores200](https://huggingface.co/datasets/facebook/flores) dataset.
The tables below compare the performance of open-source models and all applicable models from our collection.
Metrics: BLEU, ChrF2, and Unbabel/wmt22-comet-da (Comet22).

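For reference, scores of this kind can be reproduced with the `sacrebleu` and `unbabel-comet` packages. The snippet below is a sketch under assumptions: it is not necessarily the exact evaluation pipeline used for the tables, and it scores a single toy sentence pair reused from the quickstart, whereas a full run would iterate over a Flores200 split with human references.

```python
import sacrebleu
from comet import download_model, load_from_checkpoint

# Toy example reusing strings from the quickstart above.
sources = ["Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki."]
hypotheses = ["Allegro is an online e-commerce platform on which medium and small companies as well as large brands sell their products."]
references = hypotheses  # placeholder: use human references in practice

bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])  # chrF2 by default

comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet_out = comet_model.predict(
    [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)],
    batch_size=8,
    gpus=0,
)
print(f"BLEU={bleu.score:.1f}  ChrF2={chrf.score:.1f}  Comet22={100 * comet_out.system_score:.1f}")
```
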
Translation results for Polish to Czech (the Slavic direction with the __highest__ data regime):

| **Model**                                                 | **Comet22** | **BLEU** | **ChrF** | **Model Size** |
|-----------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:|
| M2M-100                                                   |    89.6     |   19.8   |   47.7   |           1.2B |
| NLLB-200                                                  |    89.4     |   19.2   |   46.7   |           1.3B |
| Opus Sla-Sla                                              |    82.9     |   14.6   |   42.6   |            64M |
| ALMA-13B-R                                                |     WIP     |   WIP    |   WIP    |            13B |
| BiDi-ces-pol (baseline)                                   |    90.0     |   20.3   |   48.5   |           209M |
| P4-pol <span style="color:red;">◊</span>                  |    90.2     |   20.2   |   48.5   |        2x 242M |
| P5-eng <span style="color:red;">◊</span>                  |    89.0     |   19.9   |   48.3   |        2x 258M |
| P5-ces <span style="color:red;">◊</span>                  |    90.3     |   20.2   |   48.6   |        2x 258M |
| MultiSlav-4slav                                           |    90.2     |   20.6   |   48.7   |           242M |
| ___MultiSlav-5lang___ <span style="color:green;">*</span> |  __90.4__   | __20.7__ | __48.9__ |           258M |

Translation results for Slovak to Slovene (the Slavic direction with the __lowest__ data regime):

| **Model**                                                 | **Comet22** | **BLEU** | **ChrF** | **Model Size** |
|-----------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:|
| M2M-100                                                   |    89.6     |   26.6   |   55.0   |           1.2B |
| NLLB-200                                                  |    88.8     |   23.3   |   42.0   |           1.3B |
| BiDi-slk-slv (baseline)                                   |    89.4     |   26.6   |   55.4   |           209M |
| P4-pol <span style="color:red;">◊</span>                  |    88.4     |   24.8   |   53.2   |        2x 242M |
| P5-eng <span style="color:red;">◊</span>                  |    88.5     |   25.6   |   54.6   |        2x 258M |
| P5-ces <span style="color:red;">◊</span>                  |    89.8     |   26.6   |   55.3   |        2x 258M |
| MultiSlav-4slav                                           |    90.1     | __27.1__ | __55.7__ |           242M |
| ___MultiSlav-5lang___ <span style="color:green;">*</span> |  __90.2__   | __27.1__ | __55.7__ |           258M |

<span style="color:green;">*</span> this model

<span style="color:red;">◊</span> a system of 2 models, *Many2XXX* and *XXX2Many*; see [P5-ces2many](https://huggingface.co/allegro/P5-ces2many)

## Limitations and Biases

We did not evaluate the inherent bias contained in the training datasets. It is advised to validate the bias of our models in your prospective domain. This might be especially problematic for translation from English into Slavic languages, which require explicitly indicated gender; the model might hallucinate based on biases present in the training data.
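
One way to see this caveat in practice (a sketch reusing `model` and `tokenizer` from the quickstart; the English source leaves gender unmarked, so the Polish output must commit to one grammatical gender):

```python
# "happy" is gender-neutral in English, but the corresponding Polish
# adjective is inflected for gender (e.g. "szczęśliwy" masc. vs
# "szczęśliwa" fem.), so the model must pick a form without any
# evidence in the source.
ambiguous = ">>pol<< I am happy."
inputs = tokenizer([ambiguous], return_tensors="pt")
print(tokenizer.batch_decode(model.generate(**inputs), skip_special_tokens=True)[0])
```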

## License

The model is licensed under CC BY 4.0, which allows for commercial use.

## Citation

TO BE UPDATED SOON 🤗

## Contact Options

Authors:
- MLR @ Allegro: [Artur Kot](https://linkedin.com/in/arturkot), [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski), [Wojciech Chojnowski](https://linkedin.com/in/wojciech-chojnowski-744702348), [Mieszko Rutkowski](https://linkedin.com/in/mieszko-rutkowski)
- Laniqo.com: [Artur Nowakowski](https://linkedin.com/in/artur-nowakowski-mt), [Kamil Guttmann](https://linkedin.com/in/kamil-guttmann), [Mikołaj Pokrywka](https://linkedin.com/in/mikolaj-pokrywka)

Please don't hesitate to contact the authors if you have any questions or suggestions:
- e-mail: [email protected] or [email protected]
- LinkedIn: [Artur Kot](https://linkedin.com/in/arturkot) or [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski)