---
license: mit
dataset_info:
  features:
  - name: id
    dtype: string
  - name: question
    dtype: string
  - name: context
    dtype: string
  - name: A
    dtype: string
  - name: B
    dtype: string
  - name: C
    dtype: string
  - name: D
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 63759933.69322235
    num_examples: 2517
  - name: test
    num_bytes: 52057383.0
    num_examples: 2086
  download_size: 19849080
  dataset_size: 115817316.69322234
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

This dataset is derived from the [`tau/scrolls`](https://huggingface.co/datasets/tau/scrolls) dataset by running the following script:

```python
import re

from datasets import load_dataset

# Load only the QuALITY subset of SCROLLS.
quality_dataset = load_dataset("tau/scrolls", "quality")


def parse_example(example):
    text = example["input"]
    # The SCROLLS "input" packs everything into one string: the question,
    # the four "(A) ... (D) ..." option lines, and then the story/article.
    options = dict(re.findall(r"\((A|B|C|D)\) ([^\n]+)", text))

    # Everything after the "(D) ..." line is the context; stripping the option
    # lines from the part before it leaves just the question.
    question_part, context = re.split(r"\(D\) [^\n]+\n", text, maxsplit=1)
    question = re.sub(r"\([A-D]\) [^\n]+\n?", "", question_part).strip()

    result = {"question": question, "context": context.strip(), **options}

    if not all(key in result for key in ["A", "B", "C", "D"]):
        raise ValueError("One or more options (A, B, C, D) are missing!")

    # Map the gold answer string back to its option index (0 = A, ..., 3 = D);
    # -1 means the answer could not be matched to any option.
    label = -1
    answer = example["output"]
    if answer is None:
        answer = ""

    for idx, option in enumerate([options["A"], options["B"], options["C"], options["D"]]):
        if answer.strip() == option.strip():
            label = idx

    result["label"] = label
    return result


quality_dataset = quality_dataset.map(parse_example)
# Drop examples whose answer string could not be matched to any of the four options.
quality_dataset = quality_dataset.filter(lambda x: x["label"] >= 0)

# The original `test` split has no labels, so `validation` becomes the new test split.
train_ds = quality_dataset["train"].remove_columns(["pid", "input", "output"])
test_ds = quality_dataset["validation"].remove_columns(["pid", "input", "output"])
```
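
The `train_ds` and `test_ds` splits produced above are what is published here. The original script stops at this point; a final upload step (not part of the script above, with a placeholder repository id) would look roughly like:

```python
from datasets import DatasetDict

# Hypothetical final step: bundle the processed splits and upload them to the Hub.
# "your-username/scrolls-quality-mcq" is a placeholder repository id, not the real one.
processed = DatasetDict({"train": train_ds, "test": test_ds})
processed.push_to_hub("your-username/scrolls-quality-mcq")
```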

Specifically, only the `quality` subset is kept and processed into multiple-choice (MCQ) format. The `test` split of the original dataset is dropped because it has no ground-truth labels; the `validation` split is used as the test split instead.

- Number of examples in train: ~2.5k
- Number of examples in test: ~2.1k
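
For a quick sanity check, the processed dataset can be loaded and an example rendered as a multiple-choice prompt. A minimal sketch, assuming the dataset is loaded from this repository (the id below is a placeholder; replace it with this dataset's actual path):

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual path on the Hub.
ds = load_dataset("your-username/scrolls-quality-mcq")
print(ds)  # expect ~2.5k train and ~2.1k test examples

example = ds["test"][0]
prompt = (
    f"{example['context']}\n\n"
    f"Question: {example['question']}\n"
    f"(A) {example['A']}\n(B) {example['B']}\n"
    f"(C) {example['C']}\n(D) {example['D']}\n"
    "Answer:"
)
gold = "ABCD"[example["label"]]  # `label` is the 0-based index of the correct option
```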

This dataset can be used to evaluate how well a model handles long contexts.
Input token counts per the [llama2](https://huggingface.co/bclavie/bert24_32k_tok_llama2) tokenizer: mean ~7.4k, SD ~2.3k, max ~11.6k.
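
These statistics can be reproduced approximately with the sketch below. It assumes the tokenizer is the one linked above and that the token count covers the concatenated context, question, and options; the exact text used for the reported numbers is not specified here, so treat the result as an estimate. The repository id is again a placeholder.

```python
import statistics

from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bclavie/bert24_32k_tok_llama2")
ds = load_dataset("your-username/scrolls-quality-mcq", split="test")  # placeholder repo id


def n_tokens(ex):
    # Assumption: count tokens over context + question + the four options.
    text = " ".join([ex["context"], ex["question"], ex["A"], ex["B"], ex["C"], ex["D"]])
    return len(tok(text)["input_ids"])


lengths = [n_tokens(ex) for ex in ds]
print(f"mean={statistics.mean(lengths):,.0f} sd={statistics.stdev(lengths):,.0f} max={max(lengths):,}")
```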

---
Relevant sections from the [SCROLLS: Standardized CompaRison Over Long Language Sequences paper](https://arxiv.org/pdf/2201.03533)
```
QuALITY (Pang et al., 2021): A multiple-choice question answering dataset over stories
and articles sourced from Project Gutenberg, the Open American National Corpus
(Fillmore et al., 1998; Ide and Suderman, 2004), and more. Experienced writers wrote
questions and distractors, and were incentivized to write answerable, unambiguous
questions such that in order to correctly answer them, human annotators must read
large portions of the given document. To measure the difficulty of their questions,
Pang et al. conducted a speed validation process, where another set of annotators
were asked to answer questions given only a short period of time to skim through
the document. As a result, 50% of the questions in QuALITY are labeled as hard,
i.e. the majority of the annotators in the speed validation setting chose the wrong
answer.
```