Update README.md
README.md (CHANGED)
@@ -34,25 +34,25 @@ Furthermore, we excluded data that contained tags that warned of various rights
This dataset was built in conjunction with human validation.
We manually validated a random sample of approximately 1% of the filtered dataset; whenever questionable data was found, we added the corresponding word to our word database and ran the filtering again.
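The spot check amounts to drawing a small random sample for human review, then extending the word database and re-running the filter. Below is a minimal sampling sketch only; the `filtered_records` argument and the 1% rate are placeholders, not our actual tooling.

```python
# Sketch: draw a ~1% random sample of the filtered dataset for human review.
import random

def sample_for_review(filtered_records, rate=0.01, seed=0):
    """Return a reproducible random subset (at least one record) for manual inspection."""
    k = max(1, int(len(filtered_records) * rate))
    return random.Random(seed).sample(filtered_records, k)
```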
# How we curate this dataset

## Problem statement

- Our goal in building this dataset is to achieve both quality and copyright/privacy safety:
  1. Creating a rights-cleared and safe-to-use dataset from an uncurated and noisy data source.
  2. Creating a diversified and balanced dataset from an uncurated and noisy data source.
## Dataset curation

1. We used category tags to limit the data to safe-to-use content, and then conducted word-based filtering (a filtering sketch follows this list).
   - For public domain data, we used the following categories only: `CC-PD-Mark, PD-self, PD-user, PD-author, PD-link, PD-old-70, PD-old-80, PD-old-90, PD-old-100`
   - Images with these tags are removed even if they are tagged as public domain: `Images with watermarks, PD-algorithm, ~AI-generated works, With trademark, Unidentified logos, License review needed, Deletion requests, Flickr images~, Personality rights warning, Cosplay, Media from YouTube` (XXXX=Year)
   - This means we solely use public domain data whose copyright has expired globally (US, EU and Japan) or has been waived directly by the authors, without using AI-generated content.
   - To address copyright laundering concerns, we also do not use any data sourced from Flickr. See: [Flickr Washing](https://commons.wikimedia.org/wiki/Commons:Problematic_sources#Flickr_washing:_is_the_work_original_with_the_uploader,_or_a_copyright_violation?)
   - After the category-tag-based filtering, we conducted the word-based filtering described above to mitigate possibly rights-infringing or harmful data.
   - Actual photographs that include recognizable human faces are removed from this dataset using our internal human face detector, to maximize privacy safety.
2. We also improved the quality of our dataset by doing the following, without using any pretrained model:
   - Image deduplication is conducted using a simple image hash algorithm (sketched below).
   - To build a diversified dataset from limited data sources, we use [WordNet](https://wordnet.princeton.edu/) and the word-count-based balancing method introduced in the original [CLIP paper](https://arxiv.org/abs/2103.00020) and in the research paper by [Hu Xu et al., "Demystifying CLIP Data"](https://arxiv.org/abs/2309.16671) (sketched below).
     - Princeton University. "About WordNet." [WordNet](https://wordnet.princeton.edu/). Princeton University. 2010.
   - To improve caption accuracy, we queried the Commons API with the words in WordNet, sorted the results by relevance, and added additional captions derived from the query words (sketched below).
   - We also machine-translated captions between Japanese and English using [our ElanMT model](https://huggingface.co/Mitsua/elan-mt-bt-en-ja), which is trained exclusively on openly licensed corpora (sketched below).
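The tag and word filtering in step 1 can be illustrated with a minimal sketch. This is not our production pipeline: the record layout, the helper name `is_safe`, and the `BLOCKED_WORDS` placeholder are assumptions for illustration, and the category sets simply restate the lists above.

```python
# Minimal sketch of category-tag and word-based filtering over hypothetical metadata records.
ALLOWED_PD_CATEGORIES = {
    "CC-PD-Mark", "PD-self", "PD-user", "PD-author", "PD-link",
    "PD-old-70", "PD-old-80", "PD-old-90", "PD-old-100",
}
EXCLUDED_CATEGORIES = {
    "Images with watermarks", "PD-algorithm", "With trademark", "Unidentified logos",
    "License review needed", "Deletion requests", "Personality rights warning",
    "Cosplay", "Media from YouTube",
}
BLOCKED_WORDS = {"example_blocked_word"}  # placeholder for the word database

def is_safe(record: dict) -> bool:
    """Keep a record only if it has an allowed PD tag, no excluded tag,
    and no blocked word in its caption."""
    categories = set(record.get("categories", []))
    if not categories & ALLOWED_PD_CATEGORIES:
        return False
    if categories & EXCLUDED_CATEGORIES:
        return False
    caption = record.get("caption", "").lower()
    return not any(word in caption for word in BLOCKED_WORDS)

records = [
    {"categories": ["CC-PD-Mark"], "caption": "Oil painting of a mountain landscape"},
    {"categories": ["PD-old-100", "Images with watermarks"], "caption": "Scanned postcard"},
]
print([r["caption"] for r in records if is_safe(r)])
```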
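Step 2 mentions image-hash deduplication. A minimal sketch with the open source `imagehash` library is below; the directory name and the choice of `phash` with exact-match lookup (rather than a Hamming-distance threshold) are illustrative assumptions.

```python
# Sketch: drop near-duplicate images via perceptual hashing (pip install pillow imagehash).
from pathlib import Path

import imagehash
from PIL import Image

def deduplicate(image_paths):
    """Keep only the first image seen for each perceptual hash value."""
    seen = set()
    kept = []
    for path in image_paths:
        h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
        if h in seen:
            continue  # treated as a duplicate of an already kept image
        seen.add(h)
        kept.append(path)
    return kept

if __name__ == "__main__":
    paths = sorted(Path("images").glob("*.jpg"))  # hypothetical image directory
    print(f"kept {len(deduplicate(paths))} of {len(paths)} images")
```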
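The WordNet-based balancing can be read as MetaCLIP-style query balancing: build query words from WordNet lemmas, match them against captions, and cap how many samples any single query may contribute. The sketch below makes several simplifying assumptions (naive substring matching, an arbitrary cap, lemmas of three or more characters) and is not the exact procedure we ran.

```python
# Sketch: WordNet-derived queries with a per-query cap, loosely following the balancing
# idea from CLIP and "Demystifying CLIP Data" (pip install nltk; then nltk.download("wordnet")).
import random
from collections import defaultdict

from nltk.corpus import wordnet as wn

def balanced_indices(captions, max_per_query=20, seed=0):
    """Return caption indices after capping each WordNet query's contribution."""
    queries = {lemma.name().replace("_", " ").lower()
               for synset in wn.all_synsets()
               for lemma in synset.lemmas()
               if len(lemma.name()) >= 3}
    matches = defaultdict(list)
    for idx, caption in enumerate(captions):
        text = caption.lower()
        for query in queries:  # naive substring matching, fine for a sketch
            if query in text:
                matches[query].append(idx)
    rng = random.Random(seed)
    selected = set()
    for idxs in matches.values():
        if len(idxs) > max_per_query:
            idxs = rng.sample(idxs, max_per_query)  # downsample over-represented queries
        selected.update(idxs)
    return sorted(selected)

print(balanced_indices(["a dog in a garden", "portrait of a dog", "old map of Kyoto"]))
```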
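The caption augmentation queries the Wikimedia Commons search API with WordNet words. A minimal sketch of a relevance-sorted file search through the public MediaWiki `action=query&list=search` endpoint follows; the example word and the way results are consumed are illustrative.

```python
# Sketch: relevance-sorted file search on Wikimedia Commons via the MediaWiki API.
import requests

API_URL = "https://commons.wikimedia.org/w/api.php"

def search_commons_files(query, limit=10):
    """Return file page titles matching `query`, in the API's default relevance order."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": query,
        "srnamespace": 6,   # namespace 6 is the File: namespace
        "srlimit": limit,
        "format": "json",
    }
    response = requests.get(API_URL, params=params, timeout=30)
    response.raise_for_status()
    return [hit["title"] for hit in response.json()["query"]["search"]]

if __name__ == "__main__":
    for title in search_commons_files("sunflower"):
        print(title)
```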
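Caption translation with ElanMT can be sketched with the Hugging Face `transformers` translation pipeline, assuming the model loads as a standard seq2seq translation model; the batch size, generation length, and sample caption are illustrative, and only the English-to-Japanese direction linked above is shown.

```python
# Sketch: English -> Japanese caption translation with the openly licensed ElanMT model
# (pip install transformers sentencepiece torch).
from transformers import pipeline

translator = pipeline("translation", model="Mitsua/elan-mt-bt-en-ja")

def translate_captions(captions, batch_size=16):
    """Translate a list of English captions into Japanese."""
    outputs = translator(captions, batch_size=batch_size, max_length=256)
    return [out["translation_text"] for out in outputs]

if __name__ == "__main__":
    print(translate_captions(["An oil painting of a mountain lake at sunrise."]))
```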
# Limitation and Biases
- Public domain images may contain biased and toxic content, such as stereotypes about certain minoritized groups. We tried to remove such content by manual word filtering.