|
--- |
|
task_categories: |
|
- image-to-text |
|
- text-generation |
|
language: |
|
- en |
|
license: |
|
- apache-2.0 |
|
multilinguality: |
|
- monolingual |
|
tags: |
|
- story |
|
- multimodal |
|
- nlg |
|
- generation |
|
- storytelling |
|
- multimodality |
|
- narrative |
|
- movie-shot |
|
paperswithcode_id: visual-writing-prompts |
|
pretty_name: Visual Writing Prompts (VWP)
|
size_categories: |
|
- 10K<n<100K |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: "vwp_v2.1_train.csv" |
|
- split: val |
|
path: "vwp_v2.1_val.csv" |
|
- split: test |
|
path: "vwp_v2.1_test.csv" |
|
default: true |
|
--- |
|
|
|
# Dataset Card for **Visual Writing Prompts Dataset (VWP)** |
|
|
|
**[Website](https://vwprompt.github.io/)** | **[Github Repository](https://github.com/vwprompt/vwp)** | **[arXiv e-Print](https://arxiv.org/abs/2301.08571)** |
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
|
|
The Visual Writing Prompts (VWP) dataset contains almost 2K curated sequences of movie shots, each containing 5-10 images. The image sequences are aligned with a total of 12K stories collected via crowdsourcing: workers were given an image sequence, together with up to 5 characters grounded in it, and asked to write a story.
|
|
|
## Dataset Details |
|
|
|
### Dataset Links |
|
|
|
<!-- Provide the basic links for the dataset. --> |
|
|
|
- **TACL 2023 Paper:** [Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences](https://doi.org/10.1162/tacl_a_00553) |
|
|
|
### Dataset Description |
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
|
|
The Visual Writing Prompts (VWP) dataset is designed to facilitate the development and testing of natural language processing models that generate stories based on sequences of images. The dataset comprises nearly 2,000 curated sequences of movie shots, each sequence containing between 5 and 10 images. These images are carefully selected to ensure they depict coherent plots centered around one or more main characters, enhancing the visual narrative structure for story generation. Aligned with these image sequences are approximately 12,000 stories, written by crowd workers on Amazon Mechanical Turk. This setup aims to provide a rich, visually grounded storytelling context that helps models generate more coherent, diverse, and engaging stories.
|
|
|
- **Curated by:** Xudong Hong, Asad Sayeed, Khushboo Mehra, Vera Demberg, Bernt Schiele |
|
- **Funded by:** See Acknowledgments in our paper |
|
- **Language(s) (NLP):** English |
|
- **License:** Apache License 2.0 |
|
|
|
## Dataset Structure |
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
The dataset is distributed as CSV files, one per split. The explanation of each column is in [this table](https://github.com/vwprompt/vwp/blob/main/column_explain.csv).
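
For convenience, the splits can be loaded with the `datasets` library. A minimal sketch, assuming the three CSV files declared in the configuration above are available in the working directory (the paths are assumptions; adjust them, or load directly from this repository, as needed):

```python
from datasets import load_dataset

# Load the three splits declared in the YAML configuration above.
# The file paths are assumptions; point them at wherever the CSVs live.
dataset = load_dataset(
    "csv",
    data_files={
        "train": "vwp_v2.1_train.csv",
        "val": "vwp_v2.1_val.csv",
        "test": "vwp_v2.1_test.csv",
    },
)

print(dataset)              # DatasetDict with train/val/test splits
print(dataset["train"][0])  # first row as a dict keyed by column name
```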
|
|
|
## Uses |
|
|
|
<!-- Address questions about how the dataset is intended to be used. --> |
|
|
|
### Direct Use |
|
|
|
<!-- This section describes suitable use cases for the dataset. --> |
|
|
|
The dataset is intended for use in natural language processing tasks, particularly for the development and evaluation of models designed to generate coherent and visually grounded stories from sequences of images. |
|
|
|
### Out-of-Scope Use |
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> |
|
|
|
The copyrights of all movie shots belong to the original copyright holders, who can be found on the IMDb page of each movie. The IMDb page is indicated by the identifier in the `imdb_id` column. For example, the `imdb_id` of the first row of our data is `tt0112573`, so the corresponding IMDb page is https://www.imdb.com/title/tt0112573/companycredits/. Do not violate the copyrights when using these images. The usage of these images is limited to academic purposes.
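
As an illustration, the mapping from `imdb_id` to the corresponding IMDb page is a simple string template; the helper below is hypothetical, not part of the dataset tooling:

```python
def imdb_credits_url(imdb_id: str) -> str:
    """Build the IMDb company-credits URL for a value from the `imdb_id` column."""
    return f"https://www.imdb.com/title/{imdb_id}/companycredits/"

# Example with the first row of the dataset:
print(imdb_credits_url("tt0112573"))
# https://www.imdb.com/title/tt0112573/companycredits/
```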
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
|
|
The dataset was curated to improve the quality of text stories generated from image sequences, focusing on visual storytelling with coherent plots and character grounding. |
|
|
|
### Source Data |
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
|
|
### Data Collection and Processing |
|
|
|
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> |
|
|
|
The source data consists of image sequences extracted from movie shots in the MovieNet dataset (https://opendatalab.com/OpenDataLab/MovieNet/tree/main/raw), selected to ensure a coherent plot around one or more main characters.
|
|
|
### Who are the source data producers? |
|
|
|
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> |
|
|
|
The images were originally produced by movie production companies and extracted by the authors of MovieNet. The stories were written by crowd workers and then compiled and refined by the authors.
|
|
|
### Annotations |
|
|
|
<!-- If the dataset contains annotations that are not part of the initial data collection, use this section to describe them. --> |
|
|
|
### Annotation process |
|
|
|
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> |
|
|
|
Crowd workers were asked to write stories that fit the provided image sequences. The annotation process included reviewing these stories for coherence, grammatical correctness, and alignment with the images. More details are in our paper.
|
|
|
### Who are the annotators? |
|
|
|
<!-- This section describes the people or systems who created the annotations. --> |
|
|
|
The annotators were five graduate students from Saarland University. Two are native English speakers. The other three are proficient in English. |
|
|
|
### Personal and Sensitive Information |
|
|
|
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> |
|
|
|
We do not collect personal or sensitive information. Personal information such as worker IDs is not released. Our anonymization process is described in our paper.
|
|
|
## Bias, Risks, and Limitations |
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
The stories in this dataset are in English only. Although we did our best to filter the images and review the stories, it was not possible to check every story, so biased or harmful content may remain. Please use the dataset carefully.
|
|
|
## Citation |
|
|
|
Xudong Hong, Asad Sayeed, Khushboo Mehra, Vera Demberg, and Bernt Schiele. 2023. [Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences](https://aclanthology.org/2023.tacl-1.33). *Transactions of the Association for Computational Linguistics*, 11:565–581. |
|
|
|
**BibTeX:** |
|
|
|
```bibtex
@article{10.1162/tacl_a_00553,
    author = {Hong, Xudong and Sayeed, Asad and Mehra, Khushboo and Demberg, Vera and Schiele, Bernt},
    title = "{Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences}",
    journal = {Transactions of the Association for Computational Linguistics},
    volume = {11},
    pages = {565--581},
    year = {2023},
    month = {06},
    issn = {2307-387X},
    doi = {10.1162/tacl_a_00553},
    url = {https://doi.org/10.1162/tacl_a_00553},
    eprint = {https://direct.mit.edu/tacl/article-pdf/doi/10.1162/tacl_a_00553/2134487/tacl_a_00553.pdf},
}
```
|
|
|
## Dataset Card Authors |
|
|
|
Xudong Hong |
|
|
|
## Dataset Card Contact |
|
|
|
[[email protected]](mailto:[email protected]) |
|
|
|
## Disclaimer
|
|
|
All the images are extracted from movie shots in the MovieNet dataset (https://opendatalab.com/OpenDataLab/MovieNet/tree/main/raw). The copyrights of all movie shots belong to the original copyright holders, who can be found on the IMDb page of each movie. The IMDb page is indicated by the identifier in the `imdb_id` column. For example, the `imdb_id` of the first row of our data is `tt0112573`, so the corresponding IMDb page is https://www.imdb.com/title/tt0112573/companycredits/. Do not violate the copyrights when using these images. We only use these images for academic purposes. Please contact the author if you have any questions.
|
|