Update README.md
README.md CHANGED
@@ -58,7 +58,7 @@ If you are looking for a more accurate (but slightly heavier) model, you can ref
The 2.0 version further improves performance by exploiting a two-phase fine-tuning strategy: the model is first fine-tuned on the English SQuAD v2 (1 epoch, 20% warmup ratio, and max learning rate of 3e-5), then further fine-tuned on the Italian SQuAD (2 epochs, no warmup, initial learning rate of 3e-5).
-In order to maximize the benefits of the multilingual procedure, [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) is used as a pre-trained model. When the double fine-tuning is completed, the embedding layer is then compressed as in [bert-base-italian-
+In order to maximize the benefits of the multilingual procedure, [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) is used as a pre-trained model. When the double fine-tuning is completed, the embedding layer is then compressed as in [bert-base-italian-uncased](https://huggingface.co/osiria/bert-base-italian-uncased) to obtain a mono-lingual model size.
<h3>Training and Performances</h3>
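
For context, a minimal sketch of the two-phase fine-tuning strategy described in the hunk above, written with the Hugging Face `Trainer`. Only the epochs, warmup, and learning rates come from the README text; the dataset identifiers (`squad_v2`, `squad_it`), batch size, sequence length, and output paths are assumptions for illustration, not the author's actual training script:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForQuestionAnswering,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-multilingual-uncased")

def to_features(examples):
    # Standard SQuAD preprocessing: map character-level answer spans to
    # token-level start/end positions; unanswerable questions point at [CLS].
    enc = tokenizer(
        examples["question"], examples["context"],
        truncation="only_second", max_length=384,
        padding="max_length", return_offsets_mapping=True,
    )
    starts, ends = [], []
    for i, offsets in enumerate(enc["offset_mapping"]):
        answer = examples["answers"][i]
        if not answer["answer_start"]:           # SQuAD v2 "no answer" case
            starts.append(0); ends.append(0); continue
        s_char = answer["answer_start"][0]
        e_char = s_char + len(answer["text"][0])
        seq = enc.sequence_ids(i)
        c0 = seq.index(1)                        # first context token
        c1 = len(seq) - 1 - seq[::-1].index(1)   # last context token
        if offsets[c0][0] > s_char or offsets[c1][1] < e_char:
            starts.append(0); ends.append(0)     # answer truncated away
        else:
            i0 = c0
            while i0 <= c1 and offsets[i0][0] <= s_char: i0 += 1
            starts.append(i0 - 1)
            i1 = c1
            while i1 >= c0 and offsets[i1][1] >= e_char: i1 -= 1
            ends.append(i1 + 1)
    enc.pop("offset_mapping")
    enc["start_positions"], enc["end_positions"] = starts, ends
    return enc

def run_phase(dataset_id, out_dir, epochs, warmup_ratio):
    data = load_dataset(dataset_id, split="train")
    data = data.map(to_features, batched=True, remove_columns=data.column_names)
    args = TrainingArguments(
        output_dir=out_dir,
        num_train_epochs=epochs,
        warmup_ratio=warmup_ratio,
        learning_rate=3e-5,                # peak/initial LR from the README
        per_device_train_batch_size=16,    # assumed, not stated in the README
    )
    Trainer(model=model, args=args, train_dataset=data).train()

run_phase("squad_v2", "phase1-squad-v2-en", epochs=1, warmup_ratio=0.2)  # phase 1
run_phase("squad_it", "phase2-squad-it", epochs=2, warmup_ratio=0.0)     # phase 2
```

The same `model` object is passed to both phases, so the second `Trainer` continues from the English-SQuAD weights rather than restarting from the pre-trained checkpoint.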
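The embedding-layer compression mentioned in the added line can be sketched in a similar spirit: keep only the rows of the multilingual word-embedding matrix that an Italian vocabulary actually needs. The actual token-selection procedure used for bert-base-italian-uncased is not documented in this README, so the sketch below simply keeps the special tokens plus whatever tokens the tokenizer produces on a placeholder Italian sample (`corpus_it` is an assumption):

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-multilingual-uncased")

# Placeholder corpus; a real compression pass would scan a large Italian corpus.
corpus_it = ["questa e una frase di esempio", "la capitale d'italia e roma"]

used = set(tokenizer.all_special_tokens)
for text in corpus_it:
    used.update(tokenizer.tokenize(text))

# Keep the selected rows in a deterministic old-id order (old id -> new row).
keep_ids = sorted(tokenizer.convert_tokens_to_ids(tok) for tok in used)

old_embeddings = model.get_input_embeddings()
new_embeddings = torch.nn.Embedding(len(keep_ids), old_embeddings.embedding_dim)
new_embeddings.weight.data.copy_(old_embeddings.weight.data[keep_ids])
model.set_input_embeddings(new_embeddings)
model.config.vocab_size = len(keep_ids)

# A matching tokenizer must be rebuilt so that token i in the new vocabulary
# maps to row i of the compressed embedding matrix.
```

Only the input embedding table is touched here; the transformer layers are unchanged, which is why the fine-tuned question-answering behavior survives the compression while the parameter count drops to a mono-lingual size.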