---
datasets:
- MIT Movie (NER Dataset)
- SQuAD

language:
- English

thumbnail:

tags:
- roberta
- roberta-base
- question-answering
- qa
- movies

license: cc-by-4.0

---
# roberta-base + Task Transfer (NER) --> Domain-Specific QA

Objective:
This is RoBERTa-base without any Domain Adaptive Pretraining --> first fine-tuned for the NER task using the MIT Movie dataset --> then the head was swapped to train on the SQuAD task. The result is a QA model capable of answering questions in the movie domain, with additional signal coming from a different task (NER - Task Transfer).
https://huggingface.co/thatdramebaazguy/roberta-base-MITmovie was used as the RoBERTa-base + NER model.

```
from transformers import pipeline

model_name = "thatdramebaazguy/roberta-base-MITmovie-squad"
qa_pipeline = pipeline(model=model_name, tokenizer=model_name, revision="v1.0", task="question-answering")
```
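
For example, the loaded pipeline can then be asked a question against a context passage (the question, context, and expected answer below are illustrative, not from the original card):

```
result = qa_pipeline(
    question="Who directed Inception?",
    context="Inception is a 2010 science fiction film written and directed by Christopher Nolan.",
)
print(result["answer"], result["score"])  # expected span: "Christopher Nolan", plus a confidence score
```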

## Overview
**Language model:** roberta-base
**Language:** English
**Downstream task:** NER --> QA
**Training data:** MIT Movie, SQuADv1
**Eval data:** MoviesQA (from https://github.com/ibm-aur-nlp/domain-specific-QA)
**Infrastructure:** 4x Tesla V100
**Code:** See [example](https://github.com/adityaarunsinghal/Domain-Adaptation/blob/master/scripts/shell_scripts/movieR_NER_squad.sh)

## Hyperparameters
```
Num examples = 88567
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 128
```
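
For reference, these settings map onto the standard transformers `TrainingArguments` roughly as follows (a minimal sketch assuming the stock Trainer API; the output directory is an assumed name, not taken from the actual run). With 32 examples per device across 4 GPUs, the effective batch size is already 128, so no gradient accumulation is needed.

```
from transformers import TrainingArguments

# Hypothetical mirror of the reported hyperparameters, not the actual training script.
training_args = TrainingArguments(
    output_dir="roberta-base-MITmovie-squad",  # assumed output path
    num_train_epochs=3,
    per_device_train_batch_size=32,  # x 4 Tesla V100s -> total train batch size 128
    gradient_accumulation_steps=1,
)
```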

## Performance

### Eval on MoviesQA
- eval_samples = 5032
- exact_match = 58.0684
- f1 = 71.3717
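
exact_match and f1 are the standard SQuAD metrics. For reference, a minimal sketch of computing them with the `evaluate` library (the prediction/reference pair below is made up for illustration):

```
import evaluate

# Standard SQuAD metrics: exact match and token-level F1.
squad_metric = evaluate.load("squad")

predictions = [{"id": "q1", "prediction_text": "Christopher Nolan"}]
references = [{"id": "q1", "answers": {"text": ["Christopher Nolan"], "answer_start": [72]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```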

GitHub Repo:
- [Domain-Adaptation Project](https://github.com/adityaarunsinghal/Domain-Adaptation/)

---