---
license: apache-2.0
datasets:
- wikipedia
language:
- it
widget:
- text: "milano è una [MASK] dell'italia"
example_title: "Example 1"
- text: "il sole è una [MASK] della via lattea"
example_title: "Example 2"
- text: "l'italia è una [MASK] dell'unione europea"
example_title: "Example 3"
---
--------------------------------------------------------------------------------------------------
<h1>Model: BLAZE 🔥</h1>
<p>Language: IT<br>Version: 𝖬𝖪-𝖨</p>
--------------------------------------------------------------------------------------------------
<h3>Introduction</h3>
This model is a <b>lightweight</b> and uncased version of <b>BERT</b> <b>[1]</b> for the <b>Italian</b> language. With its <b>55M parameters</b> and <b>220MB</b> size,
it's <b>50% lighter</b> than a typical mono-lingual BERT model, making it ideal
for circumstances where memory consumption and execution speed are critical, while still maintaining high-quality results.
<h3>Model description</h3>
The model has been obtained by taking the multilingual <b>DistilBERT</b> <b>[2]</b> model (from the HuggingFace team: [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased)) as a starting point,
and then focusing it on the Italian language while at the same time turning it into an uncased model by modifying the embedding layer
(as in <b>[3]</b>, but computing document-level frequencies over the <b>Wikipedia</b> dataset and setting a frequency threshold of 0.1%), which brings a considerable
reduction in the number of parameters.
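As a rough illustration of this vocabulary-reduction step, the sketch below prunes the multilingual embedding matrix by document-level token frequency. It is a minimal sketch under assumptions: the toy in-memory corpus and variable names are hypothetical, while the actual procedure ran over the full Italian Wikipedia with the 0.1% threshold and also rebuilt the tokenizer to match the reduced vocabulary.
```python
# Minimal sketch (not the original script): prune the embedding layer of
# multilingual DistilBERT by document-level token frequency.
from collections import Counter
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased")
model = AutoModel.from_pretrained("distilbert-base-multilingual-cased")

# Hypothetical toy corpus; the real computation used the Italian Wikipedia.
corpus = [
    "milano è una città dell'italia",
    "il sole è una stella della via lattea",
    "l'italia è una nazione dell'unione europea",
]

# Document-level frequency: in how many documents does each token id appear?
doc_freq = Counter()
for doc in corpus:
    doc_freq.update(set(tokenizer(doc.lower())["input_ids"]))

threshold = 0.001  # keep tokens appearing in at least 0.1% of the documents
keep_ids = sorted(i for i, c in doc_freq.items() if c / len(corpus) >= threshold)

# Slice the original embedding matrix down to the reduced vocabulary:
# this is where most of the parameter reduction comes from.
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(keep_ids), old_emb.size(1))
new_emb.weight.data = old_emb[keep_ids].clone()
model.set_input_embeddings(new_emb)
```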
To compensate for the deletion of cased tokens, which now forces the model to rely on lowercase representations of words that were previously capitalized,
the model has been further pre-trained on the Italian split of the [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset, using the <b>whole word masking [4]</b> technique to make it more robust
with respect to the new uncased representations.
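For reference, the snippet below shows how whole word masking behaves using the collator available in the transformers library; it is an illustration rather than the original training code, and it reuses one of the example sentences from this card.
```python
# Illustrative only: whole word masking with the transformers collator.
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("osiria/blaze-it")
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

# All sub-tokens of a randomly selected word are masked together, so the
# model cannot recover a masked piece from its visible sibling pieces.
batch = collator([tokenizer("milano è una città dell'italia")])
print(batch["input_ids"])  # ids with [MASK] inserted
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```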
The resulting model has 55M parameters, a vocabulary of 13,832 tokens, and a size of 220MB, which makes it <b>50% lighter</b> than a typical mono-lingual BERT model and
20% lighter than a typical mono-lingual DistilBERT model.
<h3>Training procedure</h3>
The model has been trained for <b>masked language modeling</b> on the Italian <b>Wikipedia</b> (~3GB) dataset for 10K steps, using the AdamW optimizer, with a batch size of 512
(obtained through 128 gradient accumulation steps),
a sequence length of 512, and a linearly decaying learning rate starting from 5e-5. The training has been performed using <b>dynamic masking</b> between epochs and
exploiting the <b>whole word masking</b> technique.
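The configuration below re-expresses these hyperparameters with the transformers Trainer API. It is a sketch under assumptions (the original training script is not published in this card, and the dataset handling here is only indicative); the figures themselves come from the description above.
```python
# Sketch of the pre-training setup described above (assumed code,
# only the hyperparameters come from this card).
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForWholeWordMask, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("osiria/blaze-it")
model = AutoModelForMaskedLM.from_pretrained("osiria/blaze-it")

# Italian Wikipedia (~3GB); a small slice keeps the sketch lightweight.
wiki = load_dataset("wikipedia", "20220301.it", split="train[:1%]")
tokenized = wiki.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=wiki.column_names,
)

args = TrainingArguments(
    output_dir="blaze-it-mlm",
    max_steps=10_000,                  # 10K training steps
    per_device_train_batch_size=4,     # 4 x 128 accumulation steps = 512 effective batch
    gradient_accumulation_steps=128,
    learning_rate=5e-5,                # linearly decaying from 5e-5; AdamW is the default optimizer
    lr_scheduler_type="linear",
)

# the collator re-samples the (whole word) mask every time a batch is built,
# which gives the dynamic masking mentioned above
trainer = Trainer(
    model=model, args=args, train_dataset=tokenized,
    data_collator=DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15),
)
# trainer.train()
```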
<h3>Performance</h3>
The following metrics have been computed on the Part of Speech Tagging and Named Entity Recognition tasks, using the <b>UD Italian ISDT</b> and <b>WikiNER</b> datasets, respectively.
The PoS tagging model has been trained for 5 epochs, and the NER model for 3 epochs, both with a constant learning rate fixed at 1e-5. For Part of Speech Tagging, the metrics have been computed on the default test set
provided with the dataset, while for Named Entity Recognition the metrics have been computed with a 5-fold cross-validation.

| Task | Recall | Precision | F1 |
| ------ | ------ | ------ | ------ |
| Part of Speech Tagging | 97.48 | 97.29 | 97.37 |
| Named Entity Recognition | 89.29 | 89.84 | 89.53 |
The metrics have been computed at token level and macro-averaged over the classes.
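As a sketch of how such a downstream evaluation can be set up (the fine-tuning scripts themselves are not part of this card, so the label set and data handling below are assumptions), the snippet fine-tunes a token-classification head with the stated hyperparameters and computes token-level, macro-averaged metrics.
```python
# Assumed downstream setup; only the hyperparameters come from this card.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
from transformers import AutoModelForTokenClassification, TrainingArguments

# 17 labels corresponds to the universal PoS tags of UD Italian ISDT;
# WikiNER uses its own (NER) label set instead.
model = AutoModelForTokenClassification.from_pretrained("osiria/blaze-it", num_labels=17)

args = TrainingArguments(
    output_dir="blaze-it-pos",
    num_train_epochs=5,            # 5 epochs for PoS tagging (3 for NER)
    learning_rate=1e-5,            # constant learning rate fixed at 1e-5
    lr_scheduler_type="constant",
)

def compute_metrics(eval_pred):
    """Token-level precision, recall and F1, macro-averaged over the classes."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    mask = labels != -100  # ignore padding / special-token positions
    p, r, f1, _ = precision_recall_fscore_support(
        labels[mask], predictions[mask], average="macro", zero_division=0
    )
    return {"precision": p, "recall": r, "f1": f1}
```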
<h3>Demo</h3>
You can try the model online (fine-tuned on named entity recognition) using this webapp:
<h3>Quick usage</h3>
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
from transformers import pipeline

# the fill-mask pipeline needs the masked-language-modeling head,
# so the model is loaded with AutoModelForMaskedLM rather than AutoModel
tokenizer = AutoTokenizer.from_pretrained("osiria/blaze-it")
model = AutoModelForMaskedLM.from_pretrained("osiria/blaze-it")

pipeline_mlm = pipeline(task="fill-mask", model=model, tokenizer=tokenizer)
```
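The pipeline can then be queried, for instance with one of the prompts shown in the widget of this card:
```python
# top prediction for the masked token
print(pipeline_mlm("milano è una [MASK] dell'italia")[0]["token_str"])
```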
<h3>Limitations</h3>
This lightweight model is mainly trained on Wikipedia, so it's particularly suitable as an agile analyzer for large volumes of natively digital text taken
from the world wide web, written in a correct and fluent form (like wikis, web pages, news, etc.). It may show limitations on chaotic text containing errors and slang expressions
(like social media posts), or on domain-specific text (like medical, financial or legal content).
<h3>References</h3>
[1] https://arxiv.org/abs/1810.04805
[2] https://arxiv.org/abs/1910.01108
[3] https://arxiv.org/abs/2010.05609
[4] https://arxiv.org/abs/1906.08101
<h3>License</h3>
The model is released under the <b>Apache-2.0</b> license.