---
license: cc-by-sa-4.0
task_categories:
- text-to-image
- image-to-text
language:
- en
- ja
tags:
- legal
pretty_name: Safe Commons PD 3M
size_categories:
- 1M<n<10M
---
# Safe Commons PD 3M
- This is a balanced, safe-to-use dataset of public domain / CC0 images.
- All images and texts come from Wikimedia Commons and Wikidata with strict filtering.
- Image licenses are either Public Domain or CC0 (varies by image).
- Text licenses are either CC0 or CC BY-SA (varies by caption source).
- No synthetic data (AI-generated images or captions) is included in the dataset.
To build this dataset, we strove to avoid any knowledge leakage from existing pre-trained models. Therefore, we do not use AI-generated captions, aesthetic scoring, or CLIP-score filtering, all of which are common in building other large-scale public domain or CC-licensed image/text datasets.

Instead, to ensure the highest level of safety, we leveraged Wikidata and built a 146,041-word database containing artist names, celebrity names, fictional character names, trademarks, and bad words, based on Wikidata data licensed under CC0. To filter the dataset, in conjunction with this word-level filtering, we restricted ourselves to categories that are safe to use and iteratively performed small-scale manual visual checks.
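As an illustration, a word-level filter of this kind can be sketched as below. The blocklist entries and function names here are hypothetical stand-ins; the real database holds 146,041 Wikidata-derived entries.

```python
import re

# Hypothetical miniature stand-in for the 146,041-word Wikidata-derived
# blocklist (artist/celebrity/character names, trademarks, bad words).
BLOCKLIST = {"pablo picasso", "mickey mouse", "coca-cola"}

def tokenize(text):
    """Lowercase a caption and split it into word tokens."""
    return re.findall(r"[a-z0-9'-]+", text.lower())

def caption_is_safe(caption, blocklist=BLOCKLIST):
    """Reject a caption if any blocklisted phrase occurs in it as a
    contiguous token sequence (so partial-word hits don't trigger)."""
    padded = " " + " ".join(tokenize(caption)) + " "
    return not any(" " + phrase + " " in padded for phrase in blocklist)
```

Matching whole token sequences rather than raw substrings avoids rejecting captions where a blocklisted word merely appears inside a longer word.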
## Data sources
Wikimedia Commons allows anyone to contribute and is therefore a relatively insecure data source. (It is still far more secure than Flickr, because Wikimedia Commons has community governance and copyright-infringing material is generally deleted. Wikimedia Commons also includes many images reposted from Flickr, but these are excluded using category tags. We strive to limit use to CC0 or public domain sources whose copyright has clearly expired or been waived directly by the authors.) We therefore first limited our use of Wikimedia Commons data to public domain / CC0 data that can be safely used, based on category tag information. Furthermore, we excluded data carrying tags that warn of various rights violations or indicate AI generation.

Dataset building was conducted in conjunction with human validation: we manually validated a random sample of roughly 1% of the filtered dataset, and whenever questionable data was found, we added the relevant word to our database and filtered again.
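One round of this validate-then-refilter loop can be sketched as follows. Here `inspect` stands in for the human reviewer, and all names and the record format are illustrative assumptions, not the project's actual code.

```python
import random

def validate_and_refilter(dataset, blocklist, inspect, is_safe,
                          sample_rate=0.01, seed=0):
    """One round of the loop: filter, manually check a ~1% random
    sample, add any newly flagged words to the word database, and
    filter the whole dataset again if anything was found."""
    rng = random.Random(seed)
    filtered = [r for r in dataset if is_safe(r, blocklist)]
    sample = [r for r in filtered if rng.random() < sample_rate]
    flagged = set()
    for record in sample:
        flagged |= inspect(record)          # human review, simulated
    if flagged:
        blocklist |= flagged                # grow the word database
        filtered = [r for r in dataset if is_safe(r, blocklist)]
    return filtered, flagged
```

The loop terminates in practice once a validation round finds nothing questionable.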
## How we curate this dataset
### Problem statement
- Our goal in building this dataset is to achieve both quality and copyright/privacy safety:
- Creating a rights-cleared, safe-to-use dataset from an uncurated and noisy data source.
- Creating a diversified and balanced dataset from an uncurated and noisy data source.
### Dataset curation
- We used category tags to limit the data to safe uses, and then conducted word-based filtering.
- For public domain data, we used only the following categories:
  `CC-PD-Mark`, `PD-self`, `PD-user`, `PD-author`, `PD-link`, `PD-old-70`, `PD-old-80`, `PD-old-90`, `PD-old-100`
- Images with the following tags are removed even if they are tagged as public domain:
  Images with watermarks, PD-algorithm, AI-generated works, With trademark, Unidentified logos, License review needed, Deletion requests, Flickr images, Personality rights warning, Cosplay, Media from YouTube (XXXX=Year)
- This means we solely use public domain data whose copyright has expired globally (US, EU, and Japan) or has been waived directly by the authors, without using any AI-generated content.
- To address copyright laundering concerns, we also do not use any data sourced from Flickr. See: Flickr Washing
- After the category tag based filtering, we conducted the word-based filtering described above to mitigate possibly rights-infringing or harmful data.
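A minimal sketch of the category-tag gate described above. The helper is hypothetical; the actual pipeline also has to handle Commons tag hierarchies and dated subcategories.

```python
# Allow/deny lists mirror the two category lists cited in this card.
ALLOWED = {"CC-PD-Mark", "PD-self", "PD-user", "PD-author", "PD-link",
           "PD-old-70", "PD-old-80", "PD-old-90", "PD-old-100"}
DENIED = {"Images with watermarks", "PD-algorithm", "AI-generated works",
          "With trademark", "Unidentified logos", "License review needed",
          "Deletion requests", "Flickr images", "Personality rights warning",
          "Cosplay", "Media from YouTube"}

def keep_image(categories):
    """Keep an image only if it carries at least one allowed PD/CC0
    tag and none of the deny-listed warning tags."""
    categories = set(categories)
    return bool(categories & ALLOWED) and not (categories & DENIED)
```

Note that the deny list wins over the allow list: a PD-tagged image is still dropped if any warning tag is present.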
- Photographs containing recognizable human faces are removed from this dataset using our internal human face detector, to maximize privacy safety.
- We also improved the quality of our dataset by doing the following, without using a pretrained model:
- Image deduplication is conducted using a simple imagehash algorithm.
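The card does not specify which hash is used beyond "a simple imagehash algorithm"; the pure-Python average hash (aHash) below, operating on an already-resized grayscale grid, illustrates the idea. Production pipelines typically use a library such as `imagehash`, which also implements stronger variants like pHash and dHash.

```python
def average_hash(pixels):
    """Average hash over a small grayscale grid (list of rows of
    0-255 values; real pipelines first resize images to e.g. 8x8).
    Each bit records whether a pixel is above the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits; a small distance flags near-duplicates."""
    return bin(a ^ b).count("1")
```

Images whose hashes fall within a small Hamming distance of each other are treated as duplicates, so minor re-encodings collapse to one copy.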
- To build a diversified dataset from limited data sources, we use WordNet and the word-count-based balancing method introduced in the original CLIP paper and in the research paper by Hu Xu et al., "Demystifying CLIP Data".
- Princeton University. "About WordNet." WordNet. Princeton University. 2010.
- To improve caption accuracy, we queried the Commons API for each WordNet word and sorted the results by relevance, adding additional captions derived from the query words.
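The balancing step can be sketched as below, in the spirit of CLIP and "Demystifying CLIP Data": each query word's contribution is capped so frequent concepts cannot dominate. The record shape and cap value are illustrative assumptions.

```python
import random

def balance_by_query_word(records, max_per_word=20000, seed=0):
    """Cap how many samples any one WordNet query word contributes.
    `records` are (query_word, image_id) pairs; words over the cap
    are randomly subsampled, rare words are kept in full."""
    rng = random.Random(seed)
    by_word = {}
    for word, image_id in records:
        by_word.setdefault(word, []).append(image_id)
    balanced = []
    for word, ids in by_word.items():
        if len(ids) > max_per_word:
            ids = rng.sample(ids, max_per_word)
        balanced.extend((word, i) for i in ids)
    return balanced
```

This flattens the long-tailed word distribution without discarding any rare concepts.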
- We also machine-translated captions between Japanese and English using our ElanMT model, which is trained exclusively on openly licensed corpora.
## Limitations and Biases
- Public domain images may contain biased and toxic content, such as stereotypes about certain minoritized groups. We tried to remove such content by manual word filtering.
## License
- The images are shared under CC0 or public domain terms by their authors.
- The captions compiled from Wikimedia Commons are licensed under CC BY-SA 4.0, originally by Wikimedia Commons contributors, with the modified parts by ELAN MITSUA Project / Abstract Engine.
- The captions compiled from Wikidata are licensed under CC BY 4.0 by ELAN MITSUA Project / Abstract Engine.
- The dataset is licensed under CC BY-SA 4.0 by ELAN MITSUA Project / Abstract Engine.
- This means you can use, adapt, and redistribute this dataset as long as you give appropriate credit, indicate if changes were made, and distribute any adapted work under the same license.
## Curated and developed by
- ELAN MITSUA Project / Abstract Engine.