---
language:
- cs
- en
- pl
- sk
- sl
library_name: transformers
license: cc-by-4.0
tags:
- translation
- mt
- marian
- pytorch
- sentence-piece
- many2one
- multilingual
- pivot
- allegro
- laniqo
pipeline_tag: translation
---
# MultiSlav P5-many2eng
This repository contains the model described in the [MultiSlav paper](https://hf.co/papers/2502.14509).
## Multilingual Many-to-English MT Model
___P5-many2eng___ is a vanilla encoder-decoder Transformer model trained on the sentence-level Machine Translation task.
The model supports translation from 4 languages: Czech, Polish, Slovak, and Slovene to English.
This model is part of the [___MultiSlav___ collection](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683).
More details are available in the [MultiSlav paper](https://hf.co/papers/2502.14509).
Experiments were conducted as part of a research project by the [Machine Learning Research](https://ml.allegro.tech/) lab at [Allegro.com](https://ml.allegro.tech/).
Big thanks to [laniqo.com](https://laniqo.com) for cooperation on this research.
___P5-many2eng___ - a _5_-language _Many-to-English_ model translating from all applicable languages to English.\
This model and [_P5-eng2many_](https://huggingface.co/allegro/P5-eng2many) combine into the ___P5-eng___ pivot system translating between _5_ languages.
_P5-eng_ first translates from the source language to an English _bridge_ sentence using the Many2One model,
and then translates from the English bridge sentence to the target language using the One2Many model (see the quickstart below).
### Model description
* **Model name:** P5-many2eng
* **Source Languages:** Czech, Polish, Slovak, Slovene
* **Target Language:** English
* **Model Collection:** [MultiSlav](https://huggingface.co/collections/allegro/multislav-6793d6b6419e5963e759a683)
* **Model type:** MarianMTModel Encoder-Decoder
* **License:** CC BY 4.0 (commercial use allowed)
* **Developed by:** [MLR @ Allegro](https://ml.allegro.tech/) & [Laniqo.com](https://laniqo.com/)
### Supported languages
To use the model, you must specify the source language of the translation.
Source-language tokens are represented as 3-letter ISO 639-3 language codes embedded in the format `>>xxx<<`.
All accepted directions and their respective tokens are listed below.
Each of them was added as a special token to the SentencePiece tokenizer (see the check after the table).
| **Source Language** | **First token** |
|---------------------|-----------------|
| Czech | `>>ces<<` |
| Polish | `>>pol<<` |
| Slovak | `>>slk<<` |
| Slovene | `>>slv<<` |
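You can quickly confirm that these language codes are handled as single special tokens. This is an illustrative check only, not required for normal use:
```python
from transformers import AutoTokenizer

# Illustrative check: the source-language token should stay intact as a single piece
# rather than being split into sub-word units.
tokenizer = AutoTokenizer.from_pretrained("Allegro/P5-many2eng")
print(tokenizer.tokenize(">>pol<< Dzień dobry"))  # the first piece should be '>>pol<<'
```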
## Use case quickstart
Example code snippet showing how to use the model. Due to a bug, the `MarianMTModel` class must be used explicitly (rather than being loaded via an Auto class).
```python
from transformers import AutoTokenizer, MarianMTModel

# Load the Many2One (Slavic -> English) model and its tokenizer.
m2o_model_name = "Allegro/P5-many2eng"
m2o_tokenizer = AutoTokenizer.from_pretrained(m2o_model_name)
m2o_model = MarianMTModel.from_pretrained(m2o_model_name)

# Prepend the source-language token (here Polish) to the input sentence.
text = ">>pol<<" + " " + "Allegro to internetowa platforma e-commerce, na której swoje produkty sprzedają średnie i małe firmy, jak również duże marki."

# Translate to the English bridge sentence.
translations = m2o_model.generate(**m2o_tokenizer.batch_encode_plus([text], return_tensors="pt"))
bridge_translation = m2o_tokenizer.batch_decode(translations, skip_special_tokens=True, clean_up_tokenization_spaces=True)
print(bridge_translation[0])
```
Generated _bridge_ English output:
> Allegro is an online e-commerce platform where medium and small companies as well as large brands sell their products.
To pivot-translate to other languages via the English _bridge_ sentence, we need the One2Many model.
The One2Many model also requires an explicit target-language token:
```python
# Load the One2Many (English -> Slavic) model and its tokenizer.
o2m_model_name = "Allegro/P5-eng2many"
o2m_tokenizer = AutoTokenizer.from_pretrained(o2m_model_name)
o2m_model = MarianMTModel.from_pretrained(o2m_model_name)

# Prepend the target-language token to each copy of the English bridge sentence.
texts_to_translate = [
    ">>ces<<" + " " + bridge_translation[0],
    ">>slk<<" + " " + bridge_translation[0],
    ">>slv<<" + " " + bridge_translation[0]
]

# Translate the batch from the English bridge sentence to the target languages.
translation = o2m_model.generate(**o2m_tokenizer.batch_encode_plus(texts_to_translate, return_tensors="pt", padding=True))
decoded_translations = o2m_tokenizer.batch_decode(translation, skip_special_tokens=True, clean_up_tokenization_spaces=True)
for trans in decoded_translations:
    print(trans)
```
Generated Polish to Czech pivot translation via English:
> Allegro je on-line e-commerce platforma, kde střední a malé firmy, stejně jako velké značky prodávají své produkty.
Generated Polish to Slovak pivot translation via English:
> Allegro je online e-commerce platforma, kde stredné a malé firmy, ako aj veľké značky predávajú svoje produkty.
Generated Polish to Slovene pivot translation via English:
> Allegro je spletna e-poslovanje platforma, kjer srednje in mala podjetja, kot tudi velike blagovne znamke prodajajo svoje izdelke.
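For convenience, the two steps above can be wrapped into a single helper. The sketch below reuses the `m2o_*` and `o2m_*` objects loaded in the previous snippets; the `pivot_translate` name is illustrative and not part of the released API:
```python
# Minimal sketch of a pivot-translation helper; `pivot_translate` is an illustrative
# name. It reuses the m2o_* and o2m_* models and tokenizers loaded above.
def pivot_translate(text, src_lang, tgt_langs):
    """Translate `text` from `src_lang` into each of `tgt_langs` via an English bridge."""
    # Step 1: source language -> English bridge sentence (Many2One model).
    bridge_inputs = m2o_tokenizer([f">>{src_lang}<< {text}"], return_tensors="pt")
    bridge_ids = m2o_model.generate(**bridge_inputs)
    bridge = m2o_tokenizer.batch_decode(bridge_ids, skip_special_tokens=True)[0]

    # Step 2: English bridge sentence -> each target language (One2Many model).
    tgt_texts = [f">>{lang}<< {bridge}" for lang in tgt_langs]
    tgt_inputs = o2m_tokenizer(tgt_texts, return_tensors="pt", padding=True)
    tgt_ids = o2m_model.generate(**tgt_inputs)
    return dict(zip(tgt_langs, o2m_tokenizer.batch_decode(tgt_ids, skip_special_tokens=True)))

# Example: Polish -> Czech, Slovak, and Slovene via the English bridge.
print(pivot_translate("Allegro to internetowa platforma e-commerce.", "pol", ["ces", "slk", "slv"]))
```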
## Training
The [SentencePiece](https://github.com/google/sentencepiece) tokenizer has a vocabulary of 80k tokens in total (16k per language). The tokenizer was trained on a randomly sampled part of the training corpus.
During training we used the [MarianNMT](https://marian-nmt.github.io/) framework.\
Base Marian configuration used: [transformer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113).
All training parameters are listed in the table below.
### Training hyperparameters:
| **Hyperparameter** | **Value** |
|-----------------------------|------------------------------------------------------------------------------------------------------------|
| Total Parameter Size | 258M |
| Training Examples | 393M |
| Vocab Size | 80k |
| Base Parameters             | [Marian transformer-big](https://github.com/marian-nmt/marian-dev/blob/master/src/common/aliases.cpp#L113) |
| Number of Encoding Layers | 6 |
| Number of Decoding Layers | 6 |
| Model Dimension | 1024 |
| FF Dimension | 4096 |
| Heads | 16 |
| Dropout | 0.1 |
| Batch Size | mini batch fit to VRAM |
| Training Accelerators | 4x A100 40GB |
| Max Length | 100 tokens |
| Optimizer | Adam |
| Warmup steps | 8000 |
| Context | Sentence-level MT |
| Source Languages Supported | Czech, Polish, Slovak, Slovene |
| Target Language Supported | English |
| Precision | float16 |
| Validation Freq | 3000 steps |
| Stop Metric | ChrF |
| Stop Criterion | 20 Validation steps |
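As a sanity check, the architecture values above can be read back from the released checkpoint. This is a hedged sketch using standard `MarianConfig` attribute names:
```python
from transformers import MarianMTModel

# Read the architecture hyperparameters back from the released checkpoint.
model = MarianMTModel.from_pretrained("Allegro/P5-many2eng")
cfg = model.config
print(cfg.encoder_layers, cfg.decoder_layers)       # expected: 6 6
print(cfg.d_model, cfg.encoder_ffn_dim)             # expected: 1024 4096
print(cfg.encoder_attention_heads, cfg.vocab_size)  # expected: 16, ~80k
print(sum(p.numel() for p in model.parameters()))   # expected: ~258M parameters
```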
## Training corpora
The main research question was: "How does adding additional, related languages impact the quality of the model?" We explored this question within the Slavic language family.
Choosing English as the bridge language provides more training examples to the model, as English is a high-resource language: the number of training examples increases to 393M, compared to 269M for [P5-many2ces](https://huggingface.co/allegro/P5-many2ces).
However, English has a limited morphology compared to Slavic languages, which may lead to loss of information (such as the gender of the subject of the sentence).
Performance results are mixed compared to [P5-many2ces](https://huggingface.co/allegro/P5-many2ces).
We used only explicitly open-source data to ensure the open-source license of our model.
Datasets were downloaded via the [MT-Data](https://pypi.org/project/mtdata/0.2.10/) library. Total number of examples after filtering and deduplication: __393M__.
The datasets used:
| **Corpus** |
|----------------------|
| paracrawl |
| opensubtitles |
| multiparacrawl |
| dgt |
| elrc |
| xlent |
| wikititles |
| wmt |
| wikimatrix |
| dcep |
| ELRC |
| tildemodel |
| europarl |
| eesc |
| eubookshop |
| emea |
| jrc_acquis |
| ema |
| qed |
| elitr_eca |
| EU-dcep |
| rapid |
| ecb |
| kde4 |
| news_commentary |
| kde |
| bible_uedin |
| europat |
| elra |
| wikipedia |
| wikimedia |
| tatoeba |
| globalvoices |
| euconst |
| ubuntu |
| php |
| ecdc |
| eac |
| eac_reference |
| gnome |
| EU-eac |
| books |
| EU-ecdc |
| newsdev |
| khresmoi_summary |
| czechtourism |
| khresmoi_summary_dev |
| worldbank |
## Evaluation
Evaluation of the models was performed on [Flores200](https://huggingface.co/datasets/facebook/flores) dataset.
The tables below compare the performance of open-source models and all applicable models from our collection.
Metrics: BLEU, ChrF2, and [Unbabel/wmt22-comet-da](https://huggingface.co/Unbabel/wmt22-comet-da).
Translation results from Polish to Czech (the Slavic direction with the __highest__ data-regime):
| **Model** | **Comet22** | **BLEU** | **ChrF** | **Model Size** |
|------------------------------------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:|
| M2M−100 | 89.6 | 19.8 | 47.7 | 1.2B |
| NLLB−200 | 89.4 | 19.2 | 46.7 | 1.3B |
| Opus Sla-Sla | 82.9 | 14.6 | 42.6 | 64M |
| BiDi-ces-pol (baseline) | 90.0 | 20.3 | 48.5 | 209M |
| P4-pol ◊ | 90.2 | 20.2 | 48.5 | 2x 242M |
| ___P5-eng___ ◊ * | 89.0 | 19.9 | 48.3 | 2x 258M |
| P5-many2ces | 90.3 | 20.2 | 48.6 | 258M |
| MultiSlav-4slav | 90.2 | 20.6 | 48.7 | 242M |
| MultiSlav-5lang | __90.4__ | __20.7__ | __48.9__ | 258M |
Translation results from Slovene to Czech (the direction to Czech with the __lowest__ data-regime):
| **Model** | **Comet22** | **BLEU** | **ChrF** | **Model Size** |
|------------------------------------------------------------------------------------|:-----------:|:--------:|:--------:|---------------:|
| M2M−100 | 90.3 | 24.3 | 51.6 | 1.2B |
| NLLB−200 | 90.0 | 22.5 | 49.9 | 1.3B |
| Opus Sla-Sla | 83.5 | 17.4 | 46.0 | 1.3B |
| BiDi-ces-slv (baseline) | 90.0 | 24.4 | 52.0 | 209M |
| P4-pol ◊ | 89.3 | 22.7 | 50.4 | 2x 242M |
| ___P5-eng___ ◊ * | 89.6 | 24.7 | 52.4 | 2x 258M |
| P5-many2ces | 90.3 | 24.9 | 52.4 | 258M |
| MultiSlav-4slav | __90.6__ | __25.3__ | __52.7__ | 242M |
| MultiSlav-5lang | __90.6__ | 25.2 | 52.5 | 258M |
\* uses the entire _P5-eng_ pivot system, including the One2Many [P5-eng2many](https://huggingface.co/allegro/P5-eng2many) model.\
◊ denotes a system of 2 models: *Many2XXX* and *XXX2Many*.
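For reference, the reported metrics can be computed with publicly available tooling. A minimal sketch, assuming `sacrebleu` and the `unbabel-comet` package are installed; the exact evaluation setup is described in the paper, and output objects may differ slightly between COMET versions:
```python
import sacrebleu
from comet import download_model, load_from_checkpoint

# Placeholder lists: fill with FLORES-200 sources, model translations, and references.
sources, hypotheses, references = ["..."], ["..."], ["..."]

# Corpus-level BLEU and ChrF via sacrebleu.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])

# COMET (Unbabel/wmt22-comet-da) system score.
comet_model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))
comet_out = comet_model.predict(
    [{"src": s, "mt": h, "ref": r} for s, h, r in zip(sources, hypotheses, references)],
    batch_size=8,
    gpus=0,  # set to 1 if a GPU is available
)
print(bleu.score, chrf.score, comet_out.system_score)
```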
## Limitations and Biases
We did not evaluate the inherent bias contained in the training datasets. It is advised to validate the bias of our models in your prospective domain. This might be especially problematic when translating from English to Slavic languages, which require explicitly marked gender; the model might hallucinate it based on biases present in the training data.
## License
The model is licensed under CC BY 4.0, which allows for commercial use.
## Citation
TO BE UPDATED SOON 🤗
## Contact Options
Authors:
- MLR @ Allegro: [Artur Kot](https://linkedin.com/in/arturkot), [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski), [Wojciech Chojnowski](https://linkedin.com/in/wojciech-chojnowski-744702348), [Mieszko Rutkowski](https://linkedin.com/in/mieszko-rutkowski)
- Laniqo.com: [Artur Nowakowski](https://linkedin.com/in/artur-nowakowski-mt), [Kamil Guttmann](https://linkedin.com/in/kamil-guttmann), [Mikołaj Pokrywka](https://linkedin.com/in/mikolaj-pokrywka)
Please don't hesitate to contact the authors if you have any questions or suggestions:
- e-mail: artur.kot@allegro.com or mikolaj.koszowski@allegro.com
- LinkedIn: [Artur Kot](https://linkedin.com/in/arturkot) or [Mikołaj Koszowski](https://linkedin.com/in/mkoszowski)