---
license: cc-by-sa-4.0
task_categories:
- summarization
- information-retrieval
- text-generation
- text2text-generation
language:
- en
pretty_name: MegaWika-Report-Generation
---
# Dataset Card for MegaWika for Report Generation

## Dataset Description

- **Homepage:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Repository:** [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika)
- **Paper:** [link](https://arxiv.org/pdf/2307.07049.pdf)
- **Point of Contact:** [Samuel Barham]([email protected])
### Dataset Summary

MegaWika is a multi- and crosslingual text dataset containing 30 million Wikipedia passages with their scraped and cleaned web citations. The passages span 50 Wikipedias in 50 languages, and the articles in which the passages were originally embedded are included for convenience. Where a Wikipedia passage is in a non-English language, an automated English translation is provided.

This dataset provides the data for report generation / multi-document summarization with information retrieval.
### Dataset Creation

See the original [HuggingFace](https://huggingface.co/datasets/hltcoe/megawika) repo.

### Languages

MegaWika is divided by Wikipedia language. There are 50 languages, including English, each designated by its 2-character ISO language code. Currently only a few languages are available, but more are coming!
## Dataset Structure

The dataset is divided into two main sections: "mono" for monolingual report generation and "cross-lingual" for cross-lingual report generation. It is further subdivided into two options: (1) generating entire Wikipedia sections from multiple citations ("all"), or (2) generating segments of each section in an iterative fashion ("iterative"). The dataset is then divided by language pair.
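The subdivisions above form a three-level hierarchy: setting, task format, and language pair. As an illustration, a small helper can enumerate the combinations; the naming below is a hypothetical sketch, and the exact configuration names on the Hub may differ:

```python
# Hypothetical sketch of the dataset's three-level subdivision:
# setting (mono / cross-lingual) x task format (all / iterative)
# x language pair. Actual Hub configuration names may differ.
from itertools import product

settings = ["mono", "cross-lingual"]
task_formats = ["all", "iterative"]
lang_pairs = ["en", "de-en"]  # illustrative pairs only

configs = [
    f"{setting}/{task}/{pair}"
    for setting, task, pair in product(settings, task_formats, lang_pairs)
]
print(len(configs))  # 2 settings x 2 formats x 2 pairs = 8 combinations
```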
### Data Instances

Given the rest of the fields (except for the ID), the goal is to produce the `gold_section_text` (e.g., given the title, intro, section name, and citations). `num_docs` is provided for filtering on the number of documents for multi-document summarization. Note that in the iterative setting it is just one citation.
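As a minimal sketch of filtering by `num_docs` (the rows below are invented placeholders using the card's field names, not real dataset entries):

```python
# Sketch: selecting multi-document instances by citation count.
# The rows below are invented placeholders with the card's field names;
# in practice they would come from the loaded dataset.
examples = [
    {"id": "ex-1", "num_docs": 1, "gold_section_text": "..."},
    {"id": "ex-2", "num_docs": 4, "gold_section_text": "..."},
    {"id": "ex-3", "num_docs": 2, "gold_section_text": "..."},
]

# Keep only instances with at least two citation documents.
multi_doc = [ex for ex in examples if ex["num_docs"] >= 2]
print([ex["id"] for ex in multi_doc])  # ['ex-2', 'ex-3']
```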
### Data Fields

The detailed structure of an instance is as follows:
```
{
  "id": <string : a unique id for the instance>
  "num_docs": <int : the number of citations for this instance>
  "title": <string : the title of the original Wikipedia article>
  "intro": <string : the text of the Wikipedia article's introduction>
  "section_name": <string : the name of the section to generate>
  "previous_text": <string : used for the iterative task format; the previous text in the section, to condition on>
  "question": <string : a natural language question that could be used for query-focused summarization, generated by ChatGPT>
  "gold_section_text": <string : the text of the original Wikipedia section, i.e. the gold label for summarization>
  "citations": <list of strings : the text of the citations (i.e. references) for the section/chunk>
}
```
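To make the schema concrete, the sketch below builds a report-generation prompt from a single invented instance. Only the field names follow the schema above; the instance contents and prompt format are illustrative assumptions, not the evaluation protocol used in the paper:

```python
# Sketch: assembling a generation prompt from one instance.
# The instance is invented; only the field names follow the card's schema.
instance = {
    "id": "example-0",
    "num_docs": 2,
    "title": "Example Article",
    "intro": "An introductory paragraph.",
    "section_name": "History",
    "previous_text": "",
    "question": "What is the history of the example?",
    "gold_section_text": "The gold section text.",
    "citations": ["First cited source text.", "Second cited source text."],
}

prompt = (
    f"Title: {instance['title']}\n"
    f"Intro: {instance['intro']}\n"
    f"Section to write: {instance['section_name']}\n"
    + "".join(f"Citation {i + 1}: {c}\n" for i, c in enumerate(instance["citations"]))
)
# A model would be asked to produce `gold_section_text` from this prompt.
print(prompt.count("Citation"))  # one line per citation -> 2
```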
## Licensing and Takedown

MegaWika 1.0 consists in part of documents scraped from across the web (based on citations linked in Wikipedia articles).

We do not own any of the scraped text, nor do we claim copyright: text drawn from Wikipedia citations is meant for research use in algorithmic design and model training.

We release this dataset and all its contents under CC-BY-SA-4.0.

### Notice and Takedown Policy

*NB*: Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

- Clearly identify yourself, with detailed contact information such as an address, telephone number, or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing, with information reasonably sufficient to allow us to locate the material.

And contact the authors.

*Take down*: We will comply with legitimate requests by removing the affected sources from the next release of the dataset.
## Additional Information

### Dataset Curators

Released and maintained by the Johns Hopkins University Human Language Technology Center of Excellence (JHU/HLTCOE).
You can contact one of the MegaWika authors, including [Samuel Barham](mailto:[email protected]), [Orion Weller](mailto:[email protected]), and [Ben van Durme](mailto:[email protected]), with questions.

### Licensing Information

Released under the [Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/) license.
### Citation Information

```
@misc{barham2023megawika,
  title={MegaWika: Millions of reports and their sources across 50 diverse languages},
  author={Samuel Barham and Orion Weller and Michelle Yuan and Kenton Murray and Mahsa Yarmohammadi and Zhengping Jiang and Siddharth Vashishtha and Alexander Martin and Anqi Liu and Aaron Steven White and Jordan Boyd-Graber and Benjamin Van Durme},
  year={2023},
  eprint={2307.07049},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```