---
annotations_creators:
- expert-generated
- crowdsourced
language_creators:
- expert-generated
languages:
- en-US
licenses:
- other-individual-licenses
multilinguality:
- monolingual
pretty_name: ''
size_categories:
- unknown
source_datasets:
- original
- extended|ade_corpus_v2
- extended|banking77
task_categories:
- text-classification
task_ids:
- multi-class-classification
---

# Dataset Card for RAFT

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** raft.elicit.org
- **Repository:** https://huggingface.co/datasets/ought/raft
- **Paper:** forthcoming
- **Leaderboard:** https://huggingface.co/spaces/ought/raft-leaderboard
- **Point of Contact:** [email protected]

### Dataset Summary

The Real-world Annotated Few-shot Tasks (RAFT) dataset is an aggregation of English-language datasets found in the real world. Associated with each dataset is a binary or multiclass classification task, intended to improve our understanding of how language models perform on tasks that have concrete, real-world value. Only 50 labeled examples are provided in each dataset.

### Supported Tasks and Leaderboards

- `text-classification`: Each subtask in RAFT is a text classification task, and the provided train and test sets can be used to submit to the [RAFT Leaderboard](https://huggingface.co/spaces/ought/raft-leaderboard). To prevent overfitting and tuning on a held-out test set, the leaderboard is only evaluated once per week. A macro-F1 score is calculated for each task, and those scores are averaged to produce the overall leaderboard score, as sketched in the example below.
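
For illustration, the scoring scheme can be sketched as follows (a minimal sketch with made-up label vectors; the task keys are only illustrative and this is not the leaderboard's actual evaluation code):

```
from statistics import mean

from sklearn.metrics import f1_score

# Made-up (references, predictions) pairs standing in for two RAFT subtasks.
per_task = {
    "ade_corpus_v2": ([1, 2, 1, 2], [1, 1, 1, 2]),
    "twitter_complaints": ([1, 1, 2, 2], [1, 2, 2, 2]),
}

# Macro-F1 per task...
task_scores = {
    task: f1_score(y_true, y_pred, average="macro")
    for task, (y_true, y_pred) in per_task.items()
}
# ...then the plain average across tasks gives the overall leaderboard score.
overall = mean(task_scores.values())
print(task_scores, overall)
```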

### Languages

RAFT is entirely in American English (en-US).

## Dataset Structure

### Data Instances


| Dataset      | First Example |
| ----------- | ----------- |
| Ade Corpus V2 | <pre>Sentence: No regional side effects were noted.<br>ID: 0<br>Label: 2</pre> |
| Banking 77 | <pre>Query: Is it possible for me to change my PIN number?<br>ID: 0<br>Label: 23<br></pre> |
| NeurIPS Impact Statement Risks  | <pre>Paper title: Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation...<br>Paper link: https://proceedings.neurips.cc/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-Paper.pdf...<br>Impact statement: This work makes the first attempt to search for all key components of panoptic pipeline and manages to accomplish this via the p...<br>ID: 0<br>Label: 1</pre> |
| One Stop English  | <pre>Article: For 85 years, it was just a grey blob on classroom maps of the solar system. But, on 15 July, Pluto was seen in high resolution ...<br>ID: 0<br>Label: 3<br></pre> |
| Overruling  | <pre>Sentence: in light of both our holding today and previous rulings in johnson, dueser, and gronroos, we now explicitly overrule dupree....<br>ID: 0<br>Label: 2<br></pre> |
| Semiconductor Org Types  | <pre>Paper title: 3Gb/s AC-coupled chip-to-chip communication using a low-swing pulse receiver...<br>Organization name: North Carolina State Univ.,Raleigh,NC,USA<br>ID: 0<br>Label: 3<br></pre> |
| Systematic Review Inclusion  | <pre>Title: Prototyping and transforming facial textures for perception research...<br>Abstract: Wavelet based methods for prototyping facial textures for artificially transforming the age of facial images were described. Pro...<br>Authors: Tiddeman, B.; Burt, M.; Perrett, D.<br>Journal: IEEE Comput Graphics Appl<br>ID: 0<br>Label: 2</pre> |
| TAI Safety Research  |  <pre>Title: Malign generalization without internal search<br>Abstract Note: In my last post, I challenged the idea that inner alignment failures should be explained by appealing to agents which perform ex...<br>Url: https://www.alignmentforum.org/posts/ynt9TD6PrYw6iT49m/malign-generalization-without-internal-search...<br>Publication Year: 2020<br>Item Type: blogPost<br>Author: Barnett, Matthew<br>Publication Title: AI Alignment Forum<br>ID: 0<br>Label: 1</pre> | 
| Terms Of Service | <pre>Sentence: Crowdtangle may change these terms of service, as described above, notwithstanding any provision to the contrary in any agreemen...<br>ID: 0<br>Label: 2<br></pre> |
| Tweet Eval Hate  | <pre>Tweet: New to Twitter-- any men on here know what the process is to get #verified?...<br>ID: 0<br>Label: 2<br></pre> |
| Twitter Complaints  | <pre>Tweet text: @HMRCcustomers No this is my first job<br>ID: 0<br>Label: 2</pre> |
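
To reproduce a row like the ones above, a single subtask can be loaded and its first training example inspected (a minimal sketch using the `datasets` library; the exact field order in the printed dictionary may differ):

```
from datasets import load_dataset

# Each RAFT subtask is a separate configuration of "ought/raft".
ade = load_dataset("ought/raft", "ade_corpus_v2")
print(ade["train"][0])
# e.g. {'Sentence': 'No regional side effects were noted.', 'ID': 0, 'Label': 2}
```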


### Data Fields

The ID field is used for indexing data points. It will be used to match your submissions with the true test labels, so you must include it in your submission. All other columns contain textual data. Some contain links and URLs to websites on the internet.

All output fields are designated with the "Label" column header. A value of 0 in this column indicates that the entry is unlabeled and should only appear in the unlabeled test set. Other values correspond to the task-specific classes. To get their textual values for a given dataset:
```
# Load the dataset (one configuration per RAFT subtask).
import datasets

dataset = datasets.load_dataset("ought/raft", "ade_corpus_v2")
# Get the object that holds information about the "Label" feature (a ClassLabel) from the train split.
label_info = dataset["train"].features["Label"]
# Use the int2str method to access the textual labels.
print([label_info.int2str(i) for i in (0, 1, 2)])
# ['Unlabeled', 'ADE-related', 'not ADE-related']
```
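
As an example of how the ID column ties a submission back to the test set, predictions for one task could be written out like this (a minimal sketch: the constant prediction is a placeholder for a real model, and the `<task>.csv` file name and ID/Label column layout are assumptions that should be checked against the leaderboard instructions):

```
import csv

from datasets import load_dataset

task = "ade_corpus_v2"
raft_task = load_dataset("ought/raft", task)
label_info = raft_task["train"].features["Label"]

# Placeholder predictions: always predict class 1; swap in your model's output here.
predictions = [label_info.int2str(1)] * len(raft_task["test"])

# Assumed submission layout: one CSV per task with an ID column and a textual Label column.
with open(f"{task}.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ID", "Label"])
    for example, pred in zip(raft_task["test"], predictions):
        writer.writerow([example["ID"], pred])
```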

### Data Splits

There are two splits provided: train data and unlabeled test data.

The training examples were chosen at random. No attempt was made to ensure that classes were balanced or proportional in the training data -- indeed, the Banking 77 task has 77 classes, so it cannot fit all of its classes into the 50 training examples.

| Dataset                        | Train Size | Test Size |
|--------------------------------|------------|-----------|
| Ade Corpus V2                  | 50         | 5000      |
| Banking 77                     | 50         | 5000      |
| NeurIPS Impact Statement Risks | 50         | 150       |
| One Stop English               | 50         | 518       |
| Overruling                     | 50         | 2350      |
| Semiconductor Org Types        | 50         | 449       |
| Systematic Review Inclusion    | 50         | 2244      |
| TAI Safety Research            | 50         | 1639      |
| Terms Of Service               | 50         | 5000      |
| Tweet Eval Hate                | 50         | 2966      |
| Twitter Complaints             | 50         | 3399      |
| **Total**                      | **550**    | **28715** |
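
The split sizes above can be verified programmatically (a minimal sketch; downloading every configuration takes a while):

```
from datasets import get_dataset_config_names, load_dataset

# Print the train/test sizes for every RAFT subtask.
for config in get_dataset_config_names("ought/raft"):
    raft_task = load_dataset("ought/raft", config)
    print(config, len(raft_task["train"]), len(raft_task["test"]))
# Each task should report 50 training examples and the test size listed above.
```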

## Dataset Creation

### Curation Rationale

Generally speaking, the rationale behind RAFT was to create a benchmark for evaluating NLP models that does not consist of contrived or artificial data sources, i.e., whose tasks were not originally assembled for the purpose of testing NLP models. However, each individual dataset in RAFT was collected independently. The majority of datasets were collected second-hand from existing curated sources. The datasets that we curated ourselves are:
* NeurIPS impact statement risks
* Semiconductor org types
* TAI Safety Research

Each of these three datasets was sourced from our existing collaborators at Ought. They had used our service, Elicit, to analyze their dataset in the past, and we contacted them to include their dataset and the associated classification task in the benchmark. For all datasets, more information is provided in our paper. For the ones which we did not curate, we provide a link to the dataset. For the ones which we did, we provide a datasheet that elaborates on many of the topics here in greater detail.

For the three datasets that we introduced:
* **NeurIPS impact statement risks** The dataset was created to evaluate the then-new requirement for authors to include an "impact statement" in their 2020 NeurIPS papers. Had it been successful? What kinds of things did authors mention most? How long were impact statements on average?
* **Semiconductor org types** The dataset was originally created to better understand which countries’ organisations have contributed most to semiconductor R&D over the past 25 years, using three main conferences. Moreover, to estimate the share of academic and private-sector contributions, the organisations were classified as “university”, “research institute”, or “company”.
* **TAI Safety Research** The primary motivations for assembling this database were to: (1) Aid potential donors in assessing organizations focusing on TAI safety by collecting and analyzing their research output. (2) Assemble a comprehensive bibliographic database that can be used as a base for future projects, such as a living review of the field.


### Source Data

#### Initial Data Collection and Normalization

* **NeurIPS impact statement risks** 
* **Semiconductor org types** 
* **TAI Safety Research**

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

RAFT aggregates many other datasets, each of which is provided under its own license. Generally, those licenses permit research and commercial use; note, however, that Semiconductor Org Types is released under CC BY-NC 4.0, which excludes commercial use.

| Dataset      | License |
| ----------- | ----------- |
| Ade Corpus V2 | Unlicensed |
| Banking 77 | CC BY 4.0 |
| NeurIPS Impact Statement Risks  | MIT License/CC BY 4.0 |
| One Stop English  | CC BY-SA 4.0 |
| Overruling  | Unlicensed |
| Semiconductor Org Types  | CC BY-NC 4.0 |
| Systematic Review Inclusion  | CC BY 4.0 |
| TAI Safety Research  | CC BY-SA 4.0 |
| Terms Of Service | Unlicensed |
| Tweet Eval Hate  | Unlicensed |
| Twitter Complaints  | Unlicensed |



### Citation Information

[More Information Needed]

### Contributions

Thanks to [@neel-alex](https://github.com/neel-alex), [@uvafan](https://github.com/uvafan), and [@lewtun](https://github.com/lewtun) for adding this dataset.