|
--- |
|
annotations_creators: |
|
- no-annotation |
|
language_creators: |
|
- crowdsourced |
|
license: |
|
- cc-by-sa-4.0 |
|
task_categories: |
|
- text-generation |
|
- fill-mask |
|
task_ids: |
|
- language-modeling |
|
- masked-language-modeling |
|
source_datasets: |
|
- original |
|
language: |
|
- en |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: "data/Fandom-v0.5.jsonl" |
|
- config_name: raw-pre-roblox |
|
data_files: |
|
- split: train |
|
path: "v2.5-chunks/*.jsonl" |
|
- config_name: raw-post-roblox |
|
data_files: |
|
- split: train |
|
path: "v2.5-chunks-roblox-filter/*.jsonl" |
|
pretty_name: Fanatic Fandom |
|
--- |
|
|
|
# Dataset Card for Fanatic Fandom |
|
|
|
![](FandomWaifu.png "SD-generated image styled after fandom's logo")
|
|
|
*Waifu to catch your attention.* |
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
*Fanatic Fandom* is a cleaned dataset built from a raw scrape of Fandom wikis. We indexed all publicly available wikis and crawled each of their pages.
|
After filtering, the dataset totals **~7.43B** tokens (llama-2-7b-chat tokenizer) / **~6.27B** tokens (RWKV tokenizer), primarily in English.
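
If you want to sanity-check the token counts yourself, here is a rough sketch using the `datasets` and `transformers` libraries. This is not how the numbers above were produced; the llama-2 tokenizer repo is gated on HF, and the sketch only samples a slice via streaming rather than doing a full pass.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Gated repo; requires accepting the llama-2 license on Hugging Face.
tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
ds = load_dataset("recursal/Fanatic-Fandom", "default", split="train", streaming=True)

total = 0
for i, row in enumerate(ds):
    total += len(tok(row["text"]).input_ids)
    if i == 9_999:  # sample only; a full pass over the ~25GB file takes a while
        break
print(f"tokens in first 10k documents: {total}")
```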
|
|
|
- **Curated by:** KaraKaraWitch |
|
- **Funded by:** Recursal.ai (I work there lol) |
|
- **Shared by:** KaraKaraWitch |
|
- **Language(s) (NLP):** Primarily English |
|
- **License:** cc-by-sa-4.0 |
|
|
|
### Dataset Sources |
|
|
|
- **Source Data:** [https://fandom.com/](https://fandom.com/) (Bot Crawled.) |
|
|
|
### Processing and Filtering |
|
|
|
We detail below the steps involved in indexing, scraping, and cleaning fandom wikis down to the HTML content files. Here's a breakdown of the process:
|
|
|
1. **Wiki Identification:** |
|
- The `WikisIndexer.py` script retrieves a list of wikis from `https://community.fandom.com/Special:NewWikis`.
|
|
|
2. **Page Indexing:** |
|
- The `IndexFandomPages.py` script uses the MediaWiki API (`api.php`) to gather a list of pages for each wiki (see the first sketch after this list).
|
|
|
3. **Page Fetching:** |
|
- The `WikiPageFetcher.py` script uses the MediaWiki API (`api.php`) to render each wiki page and save the result to a large JSONL file.
|
- Additionally, any wiki with fewer than 5 pages is not scraped, as such wikis are assumed to be low quality.
|
|
|
4. **Data Chunking:** |
|
- A single large JSONL file containing all fetched pages is split into smaller, more manageable chunks. |
|
- This is in preparation for the next step; steps 4 and 5 are sketched in the second example below.
|
|
|
5. **Roblox Wiki Removal:** |
|
- The `RobloxWikiFilter.py` script identifies and removes Roblox wikis due to the high volume of low-quality content they often generate. This filtering step simplifies the subsequent stub article removal process. |
|
- From quick napkin math (comparing the data before and after the Roblox filter), around 15.2% of fandom wikis are Roblox data.
|
|
|
6. **Content Transformation:** |
|
- HTML content is converted to Markdown format. The conversion process removes unnecessary elements like figures, stub article notices, and other irrelevant data. |
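
For concreteness, steps 2 and 3 boil down to standard MediaWiki API calls. The sketch below is not the actual `IndexFandomPages.py` / `WikiPageFetcher.py` code: the `example` subdomain, the lack of batching and retries, and the exact record construction are illustrative assumptions.

```python
import requests

# Hypothetical wiki; the real crawl iterates over every subdomain from step 1,
# and the api.php path can differ per wiki/language.
API_URL = "https://example.fandom.com/api.php"

def list_pages(api_url: str) -> list[str]:
    """Step 2: enumerate all page titles via the MediaWiki `allpages` list."""
    titles = []
    params = {"action": "query", "list": "allpages", "aplimit": "max", "format": "json"}
    while True:
        data = requests.get(api_url, params=params, timeout=30).json()
        titles.extend(p["title"] for p in data["query"]["allpages"])
        if "continue" not in data:
            return titles
        params.update(data["continue"])  # follow the continuation token

def fetch_rendered_page(api_url: str, title: str) -> dict:
    """Step 3: have the API render a single page and return the raw response."""
    params = {"action": "parse", "page": title, "prop": "text", "format": "json"}
    return requests.get(api_url, params=params, timeout=30).json()

if __name__ == "__main__":
    pages = list_pages(API_URL)
    if len(pages) < 5:
        raise SystemExit("Skipped: wikis with fewer than 5 pages are assumed low quality.")
    for title in pages[:3]:
        record = {"domain": "example", "path": "/api.php",
                  "pages": title, "content": fetch_rendered_page(API_URL, title)}
        print(record["pages"])
```

Each record mirrors the `domain` / `path` / `pages` / `content` keys documented in the Data Keys section below.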
|
|
|
**Note:** Due to the passage of time (approximately 3 months as of May 6, 2024), the specific details of the crawling process may be a little hazy. The primary challenge encountered was the significant time required to complete the crawling operation. |
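
Steps 4 and 5 are conceptually simple: split one huge JSONL file into chunks, then drop any record whose subdomain appears on a Roblox domain list. A rough sketch, assuming line-delimited JSON and a domain list in the same shape as the included `roblox.domains.txt` (the chunk size and file names are made up for illustration):

```python
import json
from pathlib import Path

CHUNK_LINES = 100_000  # illustrative; pick whatever keeps the chunk files manageable

def chunk_jsonl(src: str, out_dir: str) -> None:
    """Step 4: split one large JSONL file into numbered chunk files."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    buf, idx = [], 0
    with open(src, encoding="utf-8") as fin:
        for line in fin:
            buf.append(line)
            if len(buf) >= CHUNK_LINES:
                (out / f"chunk-{idx:05d}.jsonl").write_text("".join(buf), encoding="utf-8")
                buf, idx = [], idx + 1
    if buf:
        (out / f"chunk-{idx:05d}.jsonl").write_text("".join(buf), encoding="utf-8")

def filter_roblox(chunk_path: str, out_path: str, domains_path: str = "roblox.domains.txt") -> None:
    """Step 5: drop records whose subdomain is on the Roblox domain list."""
    roblox = set(Path(domains_path).read_text(encoding="utf-8").split())
    with open(chunk_path, encoding="utf-8") as fin, open(out_path, "w", encoding="utf-8") as fout:
        for line in fin:
            if json.loads(line)["domain"] not in roblox:
                fout.write(line)
```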
|
|
|
### Data Splits |
|
|
|
There are 3 configurations for this dataset, each with a single `train` split (a loading example follows the list):
|
|
|
- final (the `default` config)
|
- Contains the final 25GB jsonl file. |
|
- You probably want this for training. |
|
- raw-pre-roblox |
|
- Raw files, **before** Roblox filtering. |
|
- Use this if you want to start from scratch and don't want to crawl fandom again. |
|
- raw-post-roblox |
|
- Raw files, **after** Roblox filtering. |
|
- Roblox wikis removed. |
|
- Use this if you want to start from scratch and don't want to crawl fandom again. |
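
If you only want to consume the data, the configurations above map directly onto the `datasets` library (config names come from the YAML header of this card; streaming is suggested for the raw chunks given their size):

```python
from datasets import load_dataset

# Final cleaned text (the `default` config) -- what you probably want for training.
final = load_dataset("recursal/Fanatic-Fandom", "default", split="train")

# Raw chunks before / after Roblox filtering; stream to avoid a very large download.
raw_pre = load_dataset("recursal/Fanatic-Fandom", "raw-pre-roblox", split="train", streaming=True)
raw_post = load_dataset("recursal/Fanatic-Fandom", "raw-post-roblox", split="train", streaming=True)

print(next(iter(raw_post)).keys())
```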
|
|
|
### Data Keys |
|
|
|
For this dataset, we have included the files from most of the intermediate processing steps. They are listed below (a small reading example follows the list):
|
|
|
- `fandom_wikis_210224.csv` |
|
- A CSV file containing the list of wikis found when scraping `Special:NewWikis` on 21/02/2024.
|
- The columns are as follows: `Sub Domain,Name of Wiki,Path name,0`
|
- The stray zero can be ignored as it does not serve any purpose. |
|
- `fandom_wikis_pages_210224_v2.jsonl` |
|
- Contains a JSONL list of wiki pages for each wiki.
|
- Each JSONL record has the following keys:
|
- domain: str [The subdomain.] |
|
- path: str [Path to `api.php`, which can differ between languages]
|
- pages: list[str] [A list of strings containing page names] |
|
- `v2.5-chunks` [folder] |
|
- Contains all the pages fetched from the list in `fandom_wikis_pages_210224_v2.jsonl` |
|
- These chunks come from `fandom_wikis_pages_contents_210224_v2.jsonl`, which is 283.44 GB in size and too large to upload to HF.
|
- Each JSONL record has the following keys:
|
- domain: str [The subdomain.] |
|
- path: str [Path to `api.php`, which can differ between languages]
|
- pages: str [Page name] |
|
- content: raw response from api.php |
|
- `v2.5-chunks-roblox-filter` [folder] |
|
- Contains the chunk files after Roblox wikis have been filtered out.
|
- Each JSONL record has the following keys:
|
- domain: str [The subdomain.] |
|
- path: str [Path to `api.php`, which can differ between languages]
|
- pages: str [Page name] |
|
- content: raw response from api.php |
|
- `fandom-v0.5.jsonl` [file] |
|
- JSONL file containing the fully processed text.
|
- Each JSONL record has the following keys:
|
- text: str [The text content.] |
|
- meta: dict[str,str] [dictionary of metadata] |
|
- title: str [The page name]
|
- domain: str [The subdomain.] |
|
- cats: str [Categories. Extracted and unused.] |
|
- removed: list[str] [A list of removed stubs / html content] |
|
|
|
- `roblox.domains.txt` [Extras] |
|
- A txt list of Roblox domains. |
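
For a quick look at the final file without the `datasets` library, the records can be read line by line; the keys match the description above (the local path is simply wherever you downloaded `fandom-v0.5.jsonl` to):

```python
import json

with open("fandom-v0.5.jsonl", encoding="utf-8") as fin:
    for line in fin:
        record = json.loads(line)
        text = record["text"]  # processed Markdown text
        meta = record["meta"]  # title, domain, cats, removed
        print(meta["domain"], meta["title"], len(text))
        break  # just peek at the first record
```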
|
|
|
## Recursal's Vision |
|
|
|
> To make AI accessible to everyone, regardless of language or economic status.
|
|
|
This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.
|
|
|
We believe that AI should not be controlled by a select few organizations, and that it should be accessible to everyone, regardless of whether you are rich or poor, or a native English speaker.
|
|
|
### About RWKV |
|
|
|
RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.
|
|
|
The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it provides performance similar to leading transformer models while retaining the compute and energy efficiency of an RNN-based architecture.
|
|
|
You can find out more about the project and the latest models at the following links:
|
|
|
- [https://blog.rwkv.com](https://blog.rwkv.com) |
|
- [https://wiki.rwkv.com](https://wiki.rwkv.com) |
|
|
|
|
|
### About Recursal AI |
|
|
|
Recursal AI is the commercial entity built to support RWKV model development and its users, while providing commercial services via its public cloud and private-cloud / on-premise offerings.
|
|
|
As part of our vision, our commitment is to ensure open-source development of, and access to, the best foundational AI models and datasets.
|
|
|
The datasets and models provided here are part of that commitment.
|
|
|
You can find out more about Recursal AI here:
|
|
|
- [https://recursal.ai](https://recursal.ai) |
|
- [https://blog.recursal.ai](https://blog.recursal.ai) |
|
|
|
### Dataset Curators |
|
|
|
KaraKaraWitch. (I typically hang out in the PygmalionAI Discord, sometimes EleutherAI's. If something is wrong, ping `@karakarawitch` on Discord.)
|
|
|
I'd be happy if you could spread the word and recommend this dataset for your use cases `:)` |
|
|
|
### Licensing Information |
|
|
|
Most fandom user-created content is licensed under CC-BY-SA unless otherwise noted. Based on that assumption, we did not include any figures or images, as they are typically not licensed under CC-BY-SA.
|
|
|
Recursal Waifus (The banner image) are licensed under CC-BY-SA. |
|
They do not represent the related websites in any official capacity unless otherwise stated or announced by the website.
|
You may use them as a banner image. However, you must always link back to the dataset. |
|
|
|
### Citation Information |
|
|
|
``` |
|
@ONLINE{fanaticfandom,
|
title = {Fanatic Fandom}, |
|
author = {KaraKaraWitch and recursal.ai},
|
year = {2024}, |
|
howpublished = {\url{https://huggingface.co/datasets/recursal/Fanatic-Fandom}}, |
|
} |
|
``` |
|
|
|
### Special Thanks |
|
|
|
- [undeleted](https://huggingface.co/undeleted) from RyokoAI for providing initial scripts to base stuff on. |
|
I eventually decided to write my own scraper while taking inspiration from their code. |