---
configs:
- config_name: default
  data_files:
  - path: train/*.arrow
    split: train
task_categories:
- text-generation
language:
- en
size_categories:
- 1M<n<10M
pretty_name: conditional task generation with attributes
---

# Dataset Card for ctga-v1

## Dataset Details

`ctga-v1`, or conditional task generation with attributes, is a new dataset created by remixing existing instruction tuning datasets ([P3](https://github.com/bigscience-workshop/promptsource)) to train [Bonito](https://huggingface.co/BatsResearch/bonito-v1).

```python3
from datasets import load_dataset
dataset = load_dataset("BatsResearch/ctga-v1")
```
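
If you only need examples for a single task type, you can filter the loaded split. Below is a minimal sketch, assuming the `task_type` column stores plain string labels; the label `"summarization"` is an assumption, so check `dataset.unique("task_type")` for the exact values first.

```python3
from datasets import load_dataset

# Load only the training split.
dataset = load_dataset("BatsResearch/ctga-v1", split="train")

# List the task type labels present in the data; the exact strings
# (e.g. "summarization") are an assumption until inspected here.
print(dataset.unique("task_type"))

# Keep only the examples for a single task type.
summarization = dataset.filter(lambda ex: ex["task_type"] == "summarization")
print(len(summarization))
```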

### Dataset Description

- **Repository:** [Github Repo](https://github.com/BatsResearch/bonito)
- **Paper:** [arXiv](TODO)
- **Point of Contact:** [Nihal V. Nayak](mailto:[email protected])

## Dataset Creation

The dataset is derived from [P3](https://github.com/bigscience-workshop/promptsource) by annotating 323 prompt templates from 39 datasets with 16 task types.

The prompt templates in P3 are remixed to create the meta-templates, which, in turn, generate the training examples.

The meta-template input has a task type (`<|tasktype|>`) as an attribute, followed by the unannotated text or context (`<|context|>`).

The output of the meta-template comprises the attributed task, i.e. the prompt or task description and the context (`{context}`), followed by a pipe symbol (`<|pipe|>`) and the solution to the task.

We use the `<|pipe|>` symbol to separate the instruction and response pair that is used for adapting the downstream model.
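
As an illustration of this layout, the sketch below assembles one hypothetical pair. Only the special tokens come from the description above; the task type label, the exact whitespace, and the example text are illustrative assumptions.

```python3
# Hypothetical example of the meta-template layout described above.
# The whitespace, the "summarization" label, and the texts are assumptions;
# only <|tasktype|>, <|context|>, and <|pipe|> come from the dataset card.
context = "The quick brown fox jumps over the lazy dog."

meta_input = (
    "<|tasktype|>\n"
    "summarization\n"
    "<|context|>\n"
    f"{context}"
)

meta_output = (
    "Summarize the following text.\n"
    f"{context}\n"
    "<|pipe|>\n"
    "A fox jumps over a dog."
)

print(meta_input)
print(meta_output)
```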


### Data Instances

Each data instance contains the following features: _context_, _task_input_, _task_output_, _dataset_, _dataset_config_, _task_type_, _input_, and _output_.

The (_input_, _output_) pair is used to train the Bonito model.


### Data Fields

- `context`: input context
- `task_input`: prompted input without the context
- `task_output`: corresponding output
- `dataset`: source dataset
- `dataset_config`: source dataset configuration
- `task_type`: corresponding task type
- `input`: reformatted input
- `output`: reformatted output
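
To see what these fields contain in practice, the short sketch below prints every field of a single example; the field names come from the list above, and the assumption is that all of them are stored as strings.

```python3
from datasets import load_dataset

dataset = load_dataset("BatsResearch/ctga-v1", split="train")

example = dataset[0]
# Print each field listed above for one example.
for field in ["context", "task_input", "task_output", "dataset",
              "dataset_config", "task_type", "input", "output"]:
    print(f"--- {field} ---")
    print(example[field])
```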


### Source Data

All the source datasets are loaded from the Hugging Face `datasets` library.

- Extractive Question Answering & Question Generation
  - adversarial_qa/dbert
  - adversarial_qa/dbidaf
  - adversarial_qa/droberta
  - duorc/ParaphraseRC
  - duorc/SelfRC
  - squad

- Topic Classification
  - ag_news
  - dbpedia_14
  - hellaswag
  - duorc/ParaphraseRC
  - duorc/SelfRC
  - squad

- Sentiment Analysis
  - amazon_polarity
  - imdb
  - rotten_tomatoes
  - yelp_review_full

- Natural Language Inference
  - anli
  - super_glue/cb

- Multiple-Choice Question Answering
  - app_reviews
  - cosmos_qa
  - dream
  - qasc
  - quail
  - quartz
  - race/all
  - social_i_qa
  - super_glue/boolq
  - super_glue/record
  - wiki_hop/original

- Text Generation
  - app_reviews
  - cnn_dailymail/3.0.0
  - dream
  - duorc/ParaphraseRC
  - duorc/SelfRC
  - gigaword
  - samsum

- Summarization
  - cnn_dailymail/3.0.0
  - duorc/ParaphraseRC
  - duorc/SelfRC
  - gigaword
  - multi_news
  - samsum
  - xsum

- Paraphrase Generation & Identification
  - glue/mrpc
  - paws/labeled_final

- Yes-No Question Answering
  - race/all
  - social_i_qa
  - super_glue/boolq

- Sentence Completion
  - hellaswag
  - super_glue/copa

- Textual Entailment
  - super_glue/rte

- Word Sense Disambiguation
  - super_glue/wic

- Coreference Resolution
  - super_glue/wsc.fixed


## Citation


**BibTeX:**

```
@inproceedings{bonito:aclfindings24,
  title = {Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation},
  author = {Nayak, Nihal V. and Nan, Yiyang and Trost, Avi and Bach, Stephen H.},
  booktitle = {Findings of the Association for Computational Linguistics: ACL 2024},
  year = {2024}
}
```