Commit f52f478 by VictorSanh (1 parent: bbad117)

update script and data card

Files changed (2):
  1. P3.py +2 -4
  2. README.md +2 -6
P3.py CHANGED
@@ -12,7 +12,7 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-"""P3"""
+"""P3 (Public Pool of Prompts)"""
 
 
 import datasets
@@ -27,7 +27,7 @@ _CITATION = """\
 TODO"""
 
 _DESCRIPTION = """\
-P3 is a collection of prompted English datasets covering a diverse set of NLP tasks. A prompt is the combination of an input template and a target template. The templates are functions mapping a data example into natural language for the input and target sequences. For example, in the case of an NLI dataset, the data example would include fields for *Premise, Hypothesis, Label*. An input template would be *If {Premise} is true, is it also true that {Hypothesis}?*, whereas a target template can be defined with the label choices *Choices[label]*. Here *Choices* is prompt-specific metadata that consists of the options *yes, maybe, no* corresponding to *label* being entailment (0), neutral (1) or contradiction (2).
+P3 (Public Pool of Prompts) is a collection of prompted English datasets covering a diverse set of NLP tasks. A prompt is the combination of an input template and a target template. The templates are functions mapping a data example into natural language for the input and target sequences. For example, in the case of an NLI dataset, the data example would include fields for *Premise, Hypothesis, Label*. An input template would be *If {Premise} is true, is it also true that {Hypothesis}?*, whereas a target template can be defined with the label choices *Choices[label]*. Here *Choices* is prompt-specific metadata that consists of the options *yes, maybe, no* corresponding to *label* being entailment (0), neutral (1) or contradiction (2).
 
 Prompts are collected using [Promptsource](https://github.com/bigscience-workshop/promptsource), an interface to interactively write prompts on datasets, and collect prompt-specific metadata such as evaluation metrics. As of October 13th, there are 2'000 prompts collected for 270+ data(sub)sets. The collection of prompts is publicly available on [Promptsource](https://github.com/bigscience-workshop/promptsource).
 
@@ -83,8 +83,6 @@ def find_task_splits_and_features():
     """Find the available tasks under ./data and their available splits and features."""
     task_and_their_splits = defaultdict(dict)
     for stats in glob.glob(f"{_DATA_PATH}/*/stats.*.json"):
-        if "anli" not in stats:
-            continue
         folder_path = os.path.dirname(stats)
         task_name = folder_path.split("/")[-1]
         split_name = os.path.basename(stats).split(".")[1]
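
To make the template description in `_DESCRIPTION` concrete, the following is a minimal sketch of how an input template and a target template map an NLI example to input and target text. The field names, wording, and `choices` list mirror the docstring above; the functions themselves are illustrative and are not actual Promptsource templates or part of the loading script.

```python
# Illustrative sketch of the prompt idea from _DESCRIPTION, not real P3/Promptsource code.
choices = ["yes", "maybe", "no"]  # prompt-specific metadata: entailment (0), neutral (1), contradiction (2)

def input_template(example):
    # "If {Premise} is true, is it also true that {Hypothesis}?"
    return f"If {example['premise']} is true, is it also true that {example['hypothesis']}?"

def target_template(example):
    # "Choices[label]"
    return choices[example["label"]]

example = {"premise": "The cat sat on the mat.",
           "hypothesis": "An animal is on the mat.",
           "label": 0}
print(input_template(example))   # If The cat sat on the mat. is true, is it also true that An animal is on the mat.?
print(target_template(example))  # yes
```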
README.md CHANGED
@@ -32,13 +32,7 @@ task_categories:
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
 - [Annotations](#annotations)
-- [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-- [Social Impact of Dataset](#social-impact-of-dataset)
-- [Discussion of Biases](#discussion-of-biases)
-- [Other Known Limitations](#other-known-limitations)
 - [Additional Information](#additional-information)
-- [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)
@@ -183,6 +177,8 @@ The main annotation guideline was that prompts needed to be grammatical and unde
 
 The full annotation given to the contributors can be found [here](https://github.com/bigscience-workshop/promptsource/blob/main/CONTRIBUTING.md). *Note to self: the link is currently being updated with the)
 
+## Additional Information
+
 ### Licensing Information
 
 The dataset is released under Apache 2.0.
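
With the temporary `anli`-only filter removed from `find_task_splits_and_features`, the script once again exposes every prompted subset found under `./data` as a dataset configuration. A minimal, hedged usage sketch with the 🤗 Datasets library follows; config and field names are discovered at run time rather than assumed, and the schema comment reflects the P3 data card, not this commit.

```python
from datasets import get_dataset_config_names, load_dataset

# Discover the prompted subsets exposed by the script, then load one of them.
configs = get_dataset_config_names("bigscience/P3")
print(len(configs), configs[:3])

# Pick the first config and the first available split; replace these with the
# subset and split you actually need.
dataset = load_dataset("bigscience/P3", configs[0])
split = list(dataset.keys())[0]
example = dataset[split][0]
print(example.keys())  # e.g. inputs_pretokenized / targets_pretokenized in the P3 schema
```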