---
license: cc-by-4.0
language:
- en
tags:
- Anime
- Manga
- Fandom
- Wiki
- Image
size_categories:
- 100K<n<1M
---
|
|
|
# Anime Manga Characters Dataset |
|
|
|
This dataset is a metadata file describing 247,034 anime and manga characters sourced from 2,372 Fandom wiki sites. Each entry represents one character together with its associated metadata. Entries have been deduplicated on the `url` field to avoid redundancy, although a single character may still have multiple associated URLs.
|
|
|
## Potential Applications |
|
|
|
- **Multimodal Data Creation**: Use the URLs to download the respective wiki pages and images, and construct multimodal datasets (e.g., interleaved image-text datasets for machine learning tasks). |
|
- **Document Retrieval**: Use the dataset to retrieve relevant documents or to build search and retrieval models.
|
- **Image Captioning**: Pair images of characters with textual metadata for image captioning tasks. |
|
- **Text-to-Image Generation**: Generate detailed images of characters based on descriptive metadata. |
|
- **Data Analysis**: Perform data analysis on anime and manga characters. |
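For instance, a row's fields can be paired into an image-text example for multimodal training. The sketch below is a minimal illustration using an invented sample row (real rows follow the same schema); actual image bytes would still need to be downloaded from the `image` URL, e.g. with `requests`.

```python
def to_multimodal_example(row):
    """Pair the character's image URL with a caption built from its metadata."""
    caption = f"{row['title']} ({row['site_name']}): {row['description']}"
    return {"image_url": row["image"], "text": caption}

# Invented sample row, for illustration only.
row = {
    "title": "Example Character",
    "site_name": "Example Wiki",
    "url": "https://example.fandom.com/wiki/Example_Character",
    "description": "A placeholder description.",
    "image": "https://example.com/image.png",
}

example = to_multimodal_example(row)
```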
|
|
|
## Dataset Construction |
|
|
|
The reference list of anime and manga is sourced from the [Animanga Fandom Wiki](https://animanga.fandom.com/wiki/List_of_Anime_and_Manga_Wikia). |
|
|
|
The dataset includes the following metadata fields:
|
|
|
- **`title`**: The name of the character, corresponding to the title of the fandom wiki page. |
|
- **`site_name`**: The title of the anime or manga, representing the source site name of the fandom wiki. |
|
- **`url`**: The URL of the character's wiki page. |
|
- **`description`**: A brief summary of the character's wiki page (truncated, not the full content). |
|
- **`image`**: The URL of a representative image associated with the character. |
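A quick way to sanity-check rows against this schema is to verify that every field is present as a string; the check and the sample record below are illustrative (the record's values are invented, not taken from the dataset):

```python
EXPECTED_FIELDS = ("title", "site_name", "url", "description", "image")

def is_valid_record(record):
    """True if the record carries all five schema fields as strings."""
    return all(isinstance(record.get(field), str) for field in EXPECTED_FIELDS)

# Invented record, for illustration only.
record = {
    "title": "Example Character",
    "site_name": "Example Wiki",
    "url": "https://example.fandom.com/wiki/Example_Character",
    "description": "A short summary of the character's wiki page.",
    "image": "https://example.com/example.png",
}
```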
|
|
|
The metadata is extracted from each fandom wiki page's HTML roughly as follows:
|
|
|
```python
from bs4 import BeautifulSoup

# Map each metadata field to the OpenGraph property it is read from.
OG_PROPERTIES = {
    "title": "og:title",
    "site_name": "og:site_name",
    "url": "og:url",
    "description": "og:description",
    "image": "og:image",
}

def get_metadata(html):
    """Extract OpenGraph metadata from a fandom wiki page's HTML."""
    soup = BeautifulSoup(html, "lxml")
    metadata = {field: "" for field in OG_PROPERTIES}
    for field, prop in OG_PROPERTIES.items():
        # Fall back to "" if the tag or its content attribute is missing.
        if meta := soup.find("meta", {"property": prop}):
            metadata[field] = meta.get("content", "")
    return metadata
```
|
|
|
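Deduplicating the collected records on the `url` field can be sketched as a first-occurrence-wins pass (a minimal illustration, not necessarily the exact procedure used):

```python
def dedupe_by_url(records):
    """Keep only the first record seen for each distinct `url`."""
    seen = set()
    deduped = []
    for record in records:
        if record["url"] not in seen:
            seen.add(record["url"])
            deduped.append(record)
    return deduped
```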
|
|
|
## Acknowledgments |
|
|
|
This dataset owes its existence to the passionate contributions of anime and manga fans worldwide who curate and maintain the fandom wikis. |