---
license: cc-by-sa-3.0
language:
- en
tags:
- wiki
- training
task_categories:
- text-classification
- text-generation
pretty_name: Fandom23K Wikis
size_categories:
- 10M<n<100M
---

# Dataset Card for Fandom23K

## Dataset Description

- **Homepage:** (TODO) https://docs.ryokoai.com/docs/training/dataset#Fandom22K
- **Repository:** (TODO) https://github.com/RyokoAI/BigKnow2022
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** Ronsor Labs <[email protected]>

### Dataset Summary

Fandom23K is a dataset composed of 15,616,749 articles scraped from approximately 23,665 Fandom.com wikis between March 14 and March 18, 2023.
It is a subset of the upcoming BigKnow2022 dataset.

### Supported Tasks and Leaderboards

This dataset is primarily intended for unsupervised training of text generation models; however, it may be useful for other purposes.

* text-generation
* text-classification

### Languages

* English
* Potentially other languages in much smaller quantities.

## Dataset Structure

### Data Instances

```json
{
  "tag": "fandom.wikia2011",
  "text": "# Add Your Wiki's Highlights\n\nWrite the text of your article here!-_-\n\n",
  "title": "Add Your Wiki's Highlights"
}
{
  "tag": "fandom.wikia2011",
  "text": "# Add Your Wiki's Highlights!\n\nWikia wants to hear from you! What significant milestones did your wiki experience in 2011? What cool things did the community try out?\nCreate a page for the wiki you're most active on! Be sure to add it to the Entertainment, Gaming, or Lifestyle categories so it shows up in the right place!\n\n",
  "title": "Add Your Wiki's Highlights!"
}
{
  "tag": "fandom.wikia2011",
  "text": "# Assassins Creed Wiki 2011\n\nIn 2011, Assassin's Creed Wiki tested new Wikia features such as Message Wall, Chat, and New Layouts.\n\n",
  "title": "Assassins Creed Wiki 2011"
}
```

### Data Fields

* **text**: the actual article text
* **title**: the article title
* **tag**: text source tag, in the format `fandom.<wiki name>`
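
For reference, the following is a minimal sketch of loading and filtering the data with the Hugging Face `datasets` library. The repository id and split name are assumptions and may differ from the actual hosting location.

```python
# Minimal sketch (not part of the original card): load the dataset and
# inspect the three documented fields. The repository id "RyokoAI/Fandom23K"
# and the "train" split name are assumptions.
from datasets import load_dataset

dataset = load_dataset("RyokoAI/Fandom23K", split="train")
# (Pass streaming=True to load_dataset to avoid downloading everything at once.)

example = dataset[0]
print(example["title"])
print(example["tag"])        # e.g. "fandom.wikia2011"
print(example["text"][:200]) # article body, lightly markdown-formatted

# The tag field ("fandom.<wiki name>") can be used to select articles
# from a single wiki.
wikia2011 = dataset.filter(lambda row: row["tag"] == "fandom.wikia2011")
print(len(wikia2011))
```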

### Data Splits

No splitting of the data was performed.
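
Since no official splits are shipped, users who need an evaluation set can create one themselves. A minimal sketch, again assuming the repository id and a single `train` split:

```python
# Carve out a small held-out set from the single provided split.
from datasets import load_dataset

dataset = load_dataset("RyokoAI/Fandom23K", split="train")  # repo id assumed
splits = dataset.train_test_split(test_size=0.01, seed=42)
train_set, eval_set = splits["train"], splits["test"]
```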

## Dataset Creation

### Curation Rationale

Fandom23K provides an up-to-date corpus of pop culture and media information spanning a variety of interests and
hobbies. Previous datasets containing such information are either part of a larger, harder-to-handle whole (such as
Common Crawl), do not provide enough variety, or are simply outdated.

### Source Data

#### Initial Data Collection and Normalization

*More information about any referenced scripts, commands, or programs used may be found in the BigKnow2022 GitHub repository.*

First, a list of active Fandom wikis was gathered into a text file, where "active" is defined as having at least 250 images on the wiki.
This list was compiled in early January 2023, although the wiki content itself was scraped more recently.

Second, the `scrape_fandom.py` script was used to generate and download an up-to-date dump of each wiki.

Third, `wikiextractor` was used to process these dumps into single XML files, with each article stripped of all formatting
except links.

Fourth, `dump2jsonl` was used to convert the XML files into JSONL files containing one article per line. Light Markdown formatting was
applied: HTML links were converted to Markdown-formatted links, and each article's title was automatically turned into a header (sketched below).

Finally, the JSONL files were concatenated into the Fandom23K dataset.
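
The actual `dump2jsonl` implementation can be found in the BigKnow2022 repository; the sketch below is only a hypothetical illustration of the fourth step, with made-up helper names and a simplified link pattern.

```python
# Hypothetical sketch (not the actual dump2jsonl script) of converting an
# extracted article into one JSONL record: HTML links are rewritten as
# markdown links and the title is prepended as a header. Field names match
# the documented schema (tag, text, title).
import json
import re

def article_to_record(wiki_name: str, title: str, html_text: str) -> str:
    # Rewrite <a href="...">label</a> into [label](...).
    text = re.sub(
        r'<a href="([^"]+)"[^>]*>(.*?)</a>',
        r"[\2](\1)",
        html_text,
        flags=re.DOTALL,
    )
    # Prepend the article title as a markdown header.
    text = f"# {title}\n\n{text}\n\n"
    record = {"tag": f"fandom.{wiki_name}", "text": text, "title": title}
    return json.dumps(record, ensure_ascii=False)

print(article_to_record("wikia2011", "Assassins Creed Wiki 2011",
                        'Tested <a href="/wiki/Chat">Chat</a> in 2011.'))
```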

#### Who are the source language producers?

The contributors of each wiki.

### Annotations

#### Annotation process

Wiki names and article titles were collected alongside the article text. Other than that automated process, no annotation was performed.

#### Who are the annotators?

There were no human annotators.

### Personal and Sensitive Information

The dataset was collected from public wiki data. As a result, we do not believe
it should contain any PII and did not inspect it further.

## Considerations for Using the Data

### Social Impact of Dataset

This dataset is intended to be useful for anyone who wishes to train a model to generate "more entertaining" content requiring
knowledge of popular culture or a particular niche.

### Discussion of Biases

This dataset contains text from random Internet users and generally should not be used as an authoritative source of information.
Additionally, this dataset was not filtered at all. We recommend using it for research purposes only.

### Other Known Limitations

This dataset is based on a list of active wikis from January 2023, even though the actual wiki content may be more recent. Additionally,
smaller yet still active wikis may have been excluded.

## Additional Information

### Dataset Curators

Ronsor Labs

### Licensing Information

CC-BY-SA 3.0, except for any portions which state otherwise.

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]