---
language: en
tags:
- pytorch
- question-answering
datasets:
- squad2
metrics:
- exact
- f1
widget:
- text: "What discipline did Winkelmann create?"
  context: "Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art."
---

# roberta-large-finetuned-squad2

## Model description

This model is based on [roberta-large](https://huggingface.co/roberta-large) and was fine-tuned on [SQuAD2.0](https://rajpurkar.github.io/SQuAD-explorer/). The corresponding papers can be found [here (model)](https://arxiv.org/abs/1907.11692) and [here (data)](https://arxiv.org/abs/1806.03822).


## How to use

```python
from transformers import pipeline

model_name = "phiyodr/roberta-large-finetuned-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
inputs = {
    'question': 'What discipline did Winckelmann create?',
    'context': 'Johann Joachim Winckelmann was a German art historian and archaeologist. He was a pioneering Hellenist who first articulated the difference between Greek, Greco-Roman and Roman art. "The prophet and founding hero of modern archaeology", Winckelmann was one of the founders of scientific archaeology and first applied the categories of style on a large, systematic basis to the history of art. '
}
nlp(inputs)
```
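
If you need the raw start/end logits instead of the pipeline's post-processed answer (for instance, to apply your own no-answer threshold), you can drive the model directly through the generic Auto classes. A minimal sketch, assuming the standard `transformers` API; note that it naively takes the argmax of each logit vector and does not handle SQuAD2's unanswerable case:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "phiyodr/roberta-large-finetuned-squad2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

question = "What discipline did Winckelmann create?"
context = (
    "Johann Joachim Winckelmann was a German art historian and archaeologist. "
    "He was a pioneering Hellenist who first articulated the difference between "
    "Greek, Greco-Roman and Roman art. \"The prophet and founding hero of modern "
    "archaeology\", Winckelmann was one of the founders of scientific archaeology "
    "and first applied the categories of style on a large, systematic basis to "
    "the history of art."
)

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Naive decoding: take the most likely start and end positions and decode that span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer.strip())
```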



## Training procedure

The model was fine-tuned with the following hyperparameters:

```python
{
    "base_model": "roberta-large",
    "do_lower_case": True,
    "learning_rate": 3e-5,
    "num_train_epochs": 4,
    "max_seq_length": 384,
    "doc_stride": 128,
    "max_query_length": 64,
    "batch_size": 96
}
```
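
For orientation: `max_seq_length`, `doc_stride`, and `max_query_length` are preprocessing parameters rather than trainer parameters. They control how contexts longer than the model's window are split into overlapping chunks. A hedged sketch of how this chunking is typically done with a fast tokenizer (this follows the standard Hugging Face QA preprocessing pattern, not necessarily the exact script used for this model):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")

question = "What discipline did Winckelmann create?"
# Artificially long context to force multiple overlapping windows.
long_context = " ".join(
    ["Johann Joachim Winckelmann was a German art historian and archaeologist."] * 50
)

# Each window holds at most max_seq_length tokens; consecutive windows overlap
# by doc_stride tokens so an answer near a chunk boundary is not lost.
# (In the legacy SQuAD scripts, the question itself is additionally truncated
# to max_query_length tokens before this step.)
encoded = tokenizer(
    question,
    long_context,
    max_length=384,            # max_seq_length
    stride=128,                # doc_stride
    truncation="only_second",  # chunk only the context, never the question
    return_overflowing_tokens=True,
)
print(len(encoded["input_ids"]))  # number of overlapping windows produced
```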

## Eval results

- Data: [dev-v2.0.json](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json)
- Script: [evaluate-v2.0.py](https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/) (official SQuAD2.0 evaluation script, linked from the [transformers question-answering README](https://github.com/huggingface/transformers/blob/master/examples/question-answering/README.md))

```json
{
  "exact": 84.38473848227069,
  "f1": 87.89711571225455,
  "total": 11873,
  "HasAns_exact": 80.9885290148448,
  "HasAns_f1": 88.02335608157898,
  "HasAns_total": 5928,
  "NoAns_exact": 87.77123633305298,
  "NoAns_f1": 87.77123633305298,
  "NoAns_total": 5945
}
```
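
These numbers can be reproduced by writing a predictions file in the format `evaluate-v2.0.py` expects (a JSON object mapping each question id to the predicted answer string, with an empty string for questions the model deems unanswerable) and scoring it against the dev set. A minimal, unbatched sketch (slow, but simple):

```python
import json
from transformers import pipeline

model_name = "phiyodr/roberta-large-finetuned-squad2"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)

with open("dev-v2.0.json") as f:
    dataset = json.load(f)["data"]

predictions = {}
for article in dataset:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            result = nlp(
                question=qa["question"],
                context=paragraph["context"],
                handle_impossible_answer=True,  # allow "" for SQuAD2 no-answer
            )
            predictions[qa["id"]] = result["answer"]

with open("predictions.json", "w") as f:
    json.dump(predictions, f)

# Then score with the official script:
#   python evaluate-v2.0.py dev-v2.0.json predictions.json
```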