|
---
license: apache-2.0
language:
- en
- ru
- multilingual
---
|
|
|
|
|
# Model Card for xlm-roberta-large-qa-multilingual-finedtuned-ru |
|
|
|
# Model Details |
|
|
|
## Model Description |
|
|
|
xlm-roberta-large-qa-multilingual-finedtuned-ru is an XLM-RoBERTa large model fine-tuned for extractive question answering in English and Russian, using the SQuAD and SberQuAD datasets.
|
|
|
- **Developed by:** Alexander Kaigorodov |
|
- **Shared by [Optional]:** Alexander Kaigorodov |
|
- **Model type:** Question Answering |
|
- **Language(s) (NLP):** English, Russian, Multilingual |
|
- **License:** Apache 2.0 |
|
- **Parent Model:** XLM-RoBERTa |
|
- **Resources for more information:** |
|
- [Associated Paper](https://arxiv.org/pdf/1912.09723.pdf) |
|
|
|
|
|
# Uses |
|
|
|
|
|
## Direct Use |
|
This model can be used for extractive question answering in English and Russian.
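
For example, the checkpoint can be dropped into the Transformers `question-answering` pipeline. The snippet below is a minimal sketch; the question and context are illustrative placeholders, not examples provided by the model author.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into a question-answering pipeline
qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)

# Illustrative question/context pair (not from the model author);
# the pipeline returns the extracted answer span and a confidence score.
result = qa(
    question="What is the capital of Russia?",
    context="Moscow is the capital and largest city of Russia.",
)
print(result["answer"], result["score"])
```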
|
|
|
## Downstream Use [Optional] |
|
|
|
More information needed. |
|
|
|
## Out-of-Scope Use |
|
|
|
The model should not be used to intentionally create hostile or alienating environments for people. |
|
|
|
# Bias, Risks, and Limitations |
|
|
|
|
|
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. |
|
|
|
|
|
|
|
## Recommendations |
|
|
|
|
|
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. |
|
|
|
# Training Details |
|
|
|
## Training Data |
|
### XLM-RoBERTa large fine-tuned on SQuAD and SberQuAD

The base model is XLM-RoBERTa large, pretrained with a masked language modeling (MLM) objective. It was then fine-tuned on English and Russian question-answering datasets.
|
|
|
### QA Datasets Used

SQuAD (English) and SberQuAD (Russian).
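
A minimal sketch of inspecting these corpora with the `datasets` library is shown below; the Hub identifiers `squad` and `sberquad` are assumptions about where these datasets are published, not a description of the exact data or preprocessing the author used.

```python
from datasets import load_dataset

# English SQuAD and Russian SberQuAD training splits.
# The Hub dataset names "squad" and "sberquad" are assumed; verify before use.
squad = load_dataset("squad", split="train")
sberquad = load_dataset("sberquad", split="train")

# Both follow the SQuAD schema: question, context, and answer spans.
print(squad[0]["question"], squad[0]["answers"])
print(sberquad[0]["question"], sberquad[0]["answers"])
```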
|
|
|
|
|
## Training Procedure |
|
|
|
|
|
### Preprocessing |
|
|
|
More information needed |
|
|
|
### Speeds, Sizes, Times |
|
More information needed |
|
|
|
|
|
# Evaluation |
|
|
|
|
|
## Testing Data, Factors & Metrics |
|
|
|
### Testing Data |
|
|
|
More information needed |
|
|
|
|
|
### Factors |
|
More information needed |
|
|
|
### Metrics |
|
|
|
More information needed |
|
|
|
|
|
## Results |
|
The following results were obtained on SberQuAD:
|
``` |
|
f1 = 84.3 |
|
exact_match = 65.3 |
|
``` |
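
SberQuAD is conventionally scored with SQuAD-style metrics. The sketch below shows how per-example exact match and token-level F1 are typically computed; it is a simplified illustration (single reference answer, basic normalization), not the evaluation script used for the numbers above.

```python
from collections import Counter

def exact_match(prediction: str, reference: str) -> float:
    # 1.0 if the normalized strings are identical, else 0.0
    return float(prediction.strip().lower() == reference.strip().lower())

def f1_score(prediction: str, reference: str) -> float:
    # Token-level overlap between prediction and reference
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```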
|
|
|
|
|
# Model Examination |
|
|
|
More information needed |
|
|
|
# Environmental Impact |
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). |
|
|
|
- **Hardware Type:** More information needed |
|
- **Hours used:** More information needed |
|
- **Cloud Provider:** More information needed |
|
- **Compute Region:** More information needed |
|
- **Carbon Emitted:** More information needed |
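
As a rough sketch of the estimate described above (following Lacoste et al., 2019), emissions scale with hardware power draw, training time, and the carbon intensity of the compute region. All values below are placeholders, since the actual hardware and hours are not reported.

```python
# Simplified carbon estimate in kg CO2eq (placeholder values, not reported figures)
power_kw = 0.3            # average power draw of the hardware, in kW (assumed)
hours = 24                # total training time in hours (assumed)
carbon_intensity = 0.4    # kg CO2eq per kWh for the compute region (assumed)

emissions_kg = power_kw * hours * carbon_intensity
print(f"Estimated emissions: {emissions_kg:.1f} kg CO2eq")
```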
|
|
|
# Technical Specifications [optional] |
|
|
|
## Model Architecture and Objective |
|
|
|
More information needed |
|
|
|
## Compute Infrastructure |
|
|
|
More information needed |
|
|
|
### Hardware |
|
|
|
|
|
More information needed |
|
|
|
### Software |
|
|
|
More information needed. |
|
|
|
# Citation |
|
|
|
|
|
**BibTeX:** |
|
|
|
|
|
```bibtex
@incollection{Efimov_2020,
  doi = {10.1007/978-3-030-58219-7_1},
  url = {https://doi.org/10.1007%2F978-3-030-58219-7_1},
  year = 2020,
  publisher = {Springer International Publishing},
  pages = {3--15},
  author = {Pavel Efimov and Andrey Chertok and Leonid Boytsov and Pavel Braslavski},
  title = {{SberQuAD} {\textendash} Russian Reading Comprehension Dataset: Description and Analysis},
  booktitle = {Lecture Notes in Computer Science}
}
```
|
|
|
|
|
|
|
|
|
# Glossary [optional] |
|
More information needed |
|
|
|
# More Information [optional] |
|
More information needed |
|
|
|
|
|
# Model Card Authors [optional] |
|
|
|
Alexander Kaigorodov in collaboration with Ezi Ozoani and the Hugging Face team |
|
|
|
|
|
# Model Card Contact |
|
|
|
More information needed |
|
|
|
# How to Get Started with the Model |
|
|
|
Use the code below to get started with the model. |
|
|
|
<details> |
|
<summary> Click to expand </summary> |
|
|
|
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Load the tokenizer and the extractive QA model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")
model = AutoModelForQuestionAnswering.from_pretrained("AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru")
```
|
</details> |
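
Once the tokenizer and model are loaded, answers are extracted by picking the most likely start and end token positions. The snippet below is a minimal inference sketch; the question, context, and variable names are illustrative, not part of the original card.

```python
import torch

question = "Where is the Hermitage Museum located?"  # illustrative example
context = "The State Hermitage Museum is located in Saint Petersburg, Russia."

# Tokenize the question/context pair and run the model
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Most likely start and end token positions of the answer span
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1

# Decode the answer tokens back to text
answer = tokenizer.decode(inputs["input_ids"][0][start:end])
print(answer)
```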
|
|