---
language: en
license: cc-by-4.0
tags:
- roberta
- roberta-base
- question-answering
- qa
- movies
datasets:
- MIT Movie
- SQuAD
---
# roberta-base + Task Transfer (NER) --> Domain-Specific QA
Objective:
This is roberta-base without any domain-adaptive pretraining --> first fine-tuned for the NER task on the MIT Movie dataset --> then given a new head and fine-tuned on the SQuAD task. The result is a QA model capable of answering questions in the movie domain, with additional information coming from a different task (NER, i.e. task transfer).

https://huggingface.co/thatdramebaazguy/roberta-base-MITmovie was used as the roberta-base + NER model.
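The head swap itself is not shown in this card; below is a minimal sketch of it, assuming the standard `transformers` behavior of discarding the token-classification (NER) head and randomly initializing a fresh span-prediction head when a checkpoint is loaded into a QA architecture:

```
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Loading the NER checkpoint into a QA model class drops the NER head
# and initializes a new QA (span-prediction) head, which is then
# fine-tuned on SQuAD.
model = AutoModelForQuestionAnswering.from_pretrained("thatdramebaazguy/roberta-base-MITmovie")
tokenizer = AutoTokenizer.from_pretrained("thatdramebaazguy/roberta-base-MITmovie")
```

The final model can then be used directly through the pipeline API: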
```
from transformers import pipeline

model_name = "thatdramebaazguy/roberta-base-MITmovie-squad"
qa_pipeline = pipeline(task="question-answering", model=model_name, tokenizer=model_name, revision="v1.0")
```
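The pipeline can then be queried with a question and a context passage; the example below is illustrative and not drawn from the evaluation data:

```
result = qa_pipeline(
    question="Who directed Inception?",
    context="Inception is a 2010 science fiction film written and directed by Christopher Nolan.",
)
print(result["answer"])  # expected answer: "Christopher Nolan"
```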
## Overview
**Language model:** roberta-base
**Language:** English
**Downstream-task:** NER --> QA
**Training data:** MIT Movie, SQuADv1
**Eval data:** MoviesQA (from https://github.com/ibm-aur-nlp/domain-specific-QA)
**Infrastructure:** 4x Tesla V100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/movieR_NER_squad.sh)
## Hyperparameters
```
Num examples = 88567
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 128
```
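These settings correspond to a standard Hugging Face `Trainer` run; the sketch below shows how they map onto `TrainingArguments`. Only the values listed above come from the card; `output_dir` and everything else is an assumption, not taken from the actual training script linked above.

```
from transformers import TrainingArguments

# Sketch only: 32 examples per device across 4x V100 gives the
# total train batch size of 128 (no gradient accumulation assumed).
args = TrainingArguments(
    output_dir="roberta-base-MITmovie-squad",  # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=32,
)
```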
## Performance
### Eval on MoviesQA
- eval_samples = 5032
- exact_match = 55.80286
- f1 = 70.31451
### Eval on SQuADv1
- exact_match = 85.6859
- f1 = 91.96064
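Exact match and F1 are the standard SQuAD metrics. A minimal sketch of computing them with the `evaluate` library (not necessarily the script used to produce the numbers above; the example prediction and reference are made up):

```
import evaluate

squad_metric = evaluate.load("squad")
predictions = [{"id": "q1", "prediction_text": "Christopher Nolan"}]
references = [{"id": "q1", "answers": {"text": ["Christopher Nolan"], "answer_start": [52]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```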
GitHub Repo:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)

---