---
language:
  - en-ru
  - ru
  - multilingual
license: apache-2.0
---

# XLM-RoBERTa large model whole word masking finetuned on SQuAD

A model pretrained on English and Russian text using a masked language modeling (MLM) objective. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) and first released in the [fairseq repository](https://github.com/pytorch/fairseq). This model is uncased: it does not make a difference between english and English.
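
The checkpoint can be used for extractive question answering. Below is a minimal usage sketch, assuming the model is published on the Hugging Face Hub; the repo id is an assumption inferred from this page's author and title, and the question/context strings are illustrative only.

```python
# Minimal sketch: extractive QA via the transformers pipeline.
# The repo id below is an assumption, not confirmed by the card.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",  # assumed repo id
)

# The card states the model was trained on English and Russian data,
# so both languages should work for question/context pairs.
result = qa(
    question="Где обитают пингвины?",
    context="Пингвины живут в Южном полушарии, в основном в Антарктиде.",
)
print(result["answer"], result["score"])
```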

## Used Datasets

SQuAD + SberQuAD

The original SberQuAD paper is available at [arXiv:1912.09723](https://arxiv.org/abs/1912.09723) and is recommended reading.

## Evaluation results

The results obtained on SberQuAD are the following:

- f1 = 84.3
- exact_match = 65.3
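
As a sketch of what these numbers measure, here is a simplified version of the SQuAD-style metrics typically used for SberQuAD; this is an illustration rather than the card's evaluation script, and the official SQuAD scorer additionally normalizes punctuation and articles and takes the maximum over all reference answers.

```python
# Simplified SQuAD-style metrics for a single (prediction, reference) pair.
from collections import Counter

def exact_match(prediction: str, truth: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0.
    return float(prediction.strip().lower() == truth.strip().lower())

def f1(prediction: str, truth: str) -> float:
    # Token-level F1: harmonic mean of precision and recall over
    # the multiset of overlapping tokens.
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    common = Counter(pred_tokens) & Counter(truth_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("в Антарктиде", "в Антарктиде"))  # 1.0
print(f1("в Южном полушарии", "Южном полушарии"))   # partial credit
```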