Correct pre-training data
README.md CHANGED

@@ -24,12 +24,12 @@ Disclaimer: The team releasing CANINE did not write a model card for this model

 ## Model description

-CANINE is a transformers model pretrained on a large corpus of
+CANINE is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion, similar to BERT. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

 * Masked language modeling (MLM): one randomly masks part of the inputs, which the model needs to predict. This model (CANINE-c) is trained with an autoregressive character loss. One masks several character spans within each sequence, which the model then autoregressively predicts.
 * Next sentence prediction (NSP): the model concatenates two sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not.

-This way, the model learns an inner representation of
+This way, the model learns an inner representation of multiple languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the CANINE model as inputs.

 ## Intended uses & limitations

@@ -57,7 +57,7 @@ sequence_output = outputs.last_hidden_state

 ## Training data

-The CANINE model was pretrained on the
+The CANINE model was pretrained on the multilingual Wikipedia data of [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md), which includes 104 languages.

 ### BibTeX entry and citation info
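The autoregressive character loss mentioned in the MLM bullet above can be sketched in plain Python. This is a hypothetical illustration only: the `mask_spans` helper, the mask character, and the span positions are inventions for this sketch, not CANINE's actual pretraining code.

```python
# Toy sketch of character-span masking: contiguous character spans are
# hidden from the input, and the hidden spans become the targets the
# model must predict autoregressively. Span positions are fixed here
# for illustration; in real pretraining they would be sampled.

MASK = "\u2588"  # full-block character standing in for the mask


def mask_spans(text, spans):
    """Replace each (start, end) character span with mask characters.

    Returns the masked text and the list of hidden target spans.
    """
    chars = list(text)
    targets = []
    for start, end in spans:
        targets.append(text[start:end])
        for i in range(start, end):
            chars[i] = MASK
    return "".join(chars), targets


masked, targets = mask_spans("life is like a box of chocolates", [(0, 4), (15, 18)])
# masked  -> "████ is like a ███ of chocolates"
# targets -> ["life", "box"]
```

Because CANINE operates directly on characters (it is tokenization-free), the masked units are character spans rather than subword tokens, which is what distinguishes this objective from BERT-style MLM.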