VictorSanh committed on
Commit 9edf840
1 Parent(s): 6f833eb
Files changed (2)
  1. README.md +43 -44
  2. assets/nomic_map.png +3 -0
README.md CHANGED
@@ -56,63 +56,60 @@ dataset_info:
56
  - **Paper: OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents**
57
  - **Point of Contact: [email protected]**
58
 
59
- ### Dataset Summary
60
 
61
- `OBELISC` is an open, massive and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens and 353M images.
62
 
63
- This dataset can be used to train large multimodal models, significantly improving their reasoning abilities compared to models trained solely on image/text pairs. Please refer to our paper for further details about the construction of the dataset, quantitative and qualitative analyses of `OBELISC`, and experiments we conducted.
64
 
65
- ### Languages
66
-
67
- English
68
 
69
  ## Data Fields
70
 
71
- There are 4 fields: `images`, `texts`, `metadata` and `general_metadata`.
72
-
73
- For each example, the data in the columns `images` and `texts` are two lists of the same size, where for each index, one element and only one is not `None`.
74
-
75
- For example, for the web document `<image_1>text<image_2>`, in `images`, we have `[image_1,None,image_2]` and in `texts` we have `[None,text,None]`.
76
-
77
- The images are replaced by their URLs, and the users have to download them themselves, for example with the library `img2dataset`.
78
-
79
- In `metadata`, there is a string that can be transformed into a list with `json.loads(example["metadata"])`. This list will have the same size as the lists of images and texts, and will have a dictionary for each index where there is an image, and a `None` value when there is a text. This dictionary will contain the metadata of the image (original source document, unformatted source, alt-text if present, ...).
80
-
81
- Finally, in `general_metadata`, there is a string that can be transformed into a dictionary, containing the URL of the document, and information about its location in the Common Crawl data.
82
-
83
- ## Data Splits
 
 
 
 
84
 
85
- There is only one split, `train`, that contains 141,047,697 examples.
86
 
87
- ## Size
88
 
89
- `OBELISC` with images replaced by their URLs weighs 666.6 GB (😈) in arrow format and 377 GB in this uploaded `parquet` format.
90
 
91
- ## Configs
92
 
93
- The default config, downloaded when nothing is specified in the config argument, with
94
- ```
95
- from datasets import load_dataset
96
 
97
- ds = load_dataset("HuggingFaceM4/OBELISC")
98
- ```
99
- corresponds to the original version of the dataset.
100
 
101
- When building the dataset, we sent every image URL to the Spawning AI API and removed all the opted-out images.
102
- However, we noticed afterward that some images might not be opted-out, but the whole web page containing them is.
103
- This is why we created another config of the dataset to additionally filter out opted-out web pages, that can be loaded with `ds = load_dataset("HuggingFaceM4/OBELISC", config_name="opt_out_docs_removed")`.
104
 
105
- ### Visualization of OBELISC documents
106
 
107
- https://huggingface.co/spaces/HuggingFaceM4/obelisc_visualization
108
 
109
- ### Research paper
110
 
111
- https://arxiv.org/abs/2306.16527
112
 
113
- ### GitHub repository
114
 
115
- https://github.com/huggingface/OBELISC
116
 
117
  ## Terms of Use
118
 
@@ -126,10 +123,12 @@ License CC-BY-4.0.
126
 
127
  If you are using this dataset, please cite
128
  ```
129
- @inproceedings{
130
- lauren{\c{c}}on2023obe,
131
- title={OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
132
- author={Hugo Lauren{\c{c}}on and Lucile Saulnier and L{\'e}o Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
133
- year={2023}
 
 
134
  }
135
- ```
 
56
  - **Paper: OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents**
57
  - **Point of Contact: [email protected]**
58
 
59
+ `OBELISC` is an open, massive and curated collection of interleaved image-text web documents, containing 141M English documents, 115B text tokens and 353M images, extracted from Common Crawl dumps between February 2020 and February 2023. The collection and filtering steps are described in our [paper](https://huggingface.co/papers/2306.16527).
60
 
61
+ Interleaved image-text web documents are a succession of text paragraphs interleaved with images, such as web pages that contain images. Models trained on these web documents outperform vision and language models trained solely on image-text pairs on various benchmarks. They can also generate long and coherent text about a set of multiple images. As an example, we trained [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-80b), a visual language model that accepts arbitrary sequences of image and text inputs and produces text outputs.
62
 
63
+ We provide an [interactive visualization](TODO once we have final link public) of OBELICS that allows exploring its content. The map shows a subset of 11M of the 141M documents.
64
 
65
+ [![OBELICS Nomic map](assets/nomic_map.png)](www.google.com)
66
+ TODO: change the link once we have the final public link
 
67
 
68
  ## Data Fields
69
 
70
+ An example of a sample looks as follows:
71
+ ```
72
+ # The example has been cropped
73
+
74
+ {
75
+ 'images': [
76
+ 'https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg',
77
+ None
78
+ ],
79
+ 'metadata': '[{"document_url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "unformatted_src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "src": "https://cdn.motor1.com/images/mgl/oRKO0/s1/lamborghini-urus-original-carbon-fiber-accessories.jpg", "formatted_filename": "lamborghini urus original carbon fiber accessories", "alt_text": "VW Group Allegedly Receives Offer To Sell Lamborghini For $9.2 Billion", "original_width": 1920, "original_height": 1080, "format": "jpeg"}, null]',
80
+ 'general_metadata': '{"url": "https://lamborghinichat.com/forum/news/vw-group-allegedly-receives-offer-to-sell-lamborghini-for-9-2-billion.728/", "warc_filename": "crawl-data/CC-MAIN-2021-25/segments/1623488528979.69/warc/CC-MAIN-20210623011557-20210623041557-00312.warc.gz", "warc_record_offset": 322560850, "warc_record_length": 17143}',
81
+ 'texts': [
82
+ None,
83
+ 'The buyer would get everything, including Lambo\'s headquarters.\n\nThe investment groupQuantum Group AG has submitted a€7.5 billion ($9.2 billion at current exchange rates) offer to purchase Lamborghini from Volkswagen Group, Autocar reports. There\'s no info yet about whether VW intends to accept the offer or further negotiate the deal.\n\nQuantum ... Group Chief Executive Herbert Diess said at the time.'
84
+ ]
85
+ }
86
+ ```
87
 
88
+ Each sample is composed of the same 4 fields: `images`, `texts`, `metadata` and `general_metadata`. `images` and `texts` are two lists of the same size, in which, for each index, exactly one element is not `None`. For example, for the interleaved web document `<image_1>text<image_2>`, we would find `[image_1, None, image_2]` in `images` and `[None, text, None]` in `texts`.
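+
+ For illustration, the two lists can be zipped back into the original interleaved sequence. The sketch below is only an example (the function name and the `(kind, value)` tuple representation are arbitrary choices, not part of the dataset):
+ ```
+ def interleave(sample):
+     """Rebuild the interleaved sequence of one web document."""
+     sequence = []
+     for image_url, text in zip(sample["images"], sample["texts"]):
+         # For each index, exactly one of the two entries is not None.
+         if image_url is not None:
+             sequence.append(("image", image_url))
+         else:
+             sequence.append(("text", text))
+     return sequence
+ ```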
89
 
90
+ The images are replaced by their URLs, and users need to download them themselves, for instance with the library [img2dataset](https://github.com/rom1504/img2dataset).
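+
+ At small scale, a naive downloader is enough for experimentation. The snippet below is only a sketch using `requests` (the output directory, file naming and timeout are arbitrary choices); `img2dataset` remains the recommended tool for downloading at the scale of OBELICS:
+ ```
+ import os
+ import requests
+
+ def download_sample_images(sample, out_dir="obelics_images"):
+     """Sequentially download the (non-None) image URLs of one sample."""
+     os.makedirs(out_dir, exist_ok=True)
+     for idx, url in enumerate(sample["images"]):
+         if url is None:
+             continue
+         try:
+             response = requests.get(url, timeout=10)
+             response.raise_for_status()
+         except requests.RequestException:
+             continue  # skip dead or slow links
+         with open(os.path.join(out_dir, f"{idx:04d}.img"), "wb") as f:
+             f.write(response.content)
+ ```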
91
 
92
+ `metadata` is the string representation of a list containing information about each of the images. It has the same length as `texts` and `images` and records, for each image, relevant information such as the original source document, the unformatted source, and the alternative text if present.
93
 
94
+ `general_metadata` is the string representation of a dictionary containing the URL of the document and information regarding its extraction from Common Crawl snapshots.
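+
+ Both fields can be decoded with `json.loads`. A minimal sketch (the helper name is arbitrary):
+ ```
+ import json
+
+ def decode_metadata(sample):
+     # A list aligned with `images`/`texts`: a dict where there is an image,
+     # None where there is a text.
+     image_metadata = json.loads(sample["metadata"])
+     # A single dict for the whole document (url, warc_filename, ...).
+     general_metadata = json.loads(sample["general_metadata"])
+     return image_metadata, general_metadata
+ ```
+ For instance, `decode_metadata(sample)[1]["url"]` gives the URL of the source web page.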
95
 
96
+ ## Size and Data Splits
 
 
97
 
98
+ There is only one split, `train`, that contains 141,047,697 documents.
 
 
99
 
100
+ `OBELISC` with images replaced by their URLs weighs 666.6 GB (😈) in arrow format and 377 GB in the uploaded `parquet` format.
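+
+ Given this size, streaming can be more convenient than downloading all the `parquet` files up front; a minimal sketch with the `datasets` library:
+ ```
+ from datasets import load_dataset
+
+ # Stream the train split instead of downloading the full ~377 GB of parquet files.
+ ds = load_dataset("HuggingFaceM4/OBELISC", split="train", streaming=True)
+ first_document = next(iter(ds))
+ print(first_document.keys())  # images, texts, metadata, general_metadata
+ ```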
 
 
101
 
102
+ ## Opted-out content
103
 
104
+ To respect the preferences of content creators, we removed from OBELICS all images for which creators explicitly opted out of AI model training. We used the [Spawning API](https://api.spawning.ai/spawning-api) to verify that the images in the dataset respect the original copyright owners’ choices.
105
 
106
+ However, due to an error on our side, we did not remove entire documents (i.e., URLs) that are opted out of AI model training. As of July 2023 (TODO verify), we estimate that these documents represent 4 to 5% of OBELICS. The config `opt_out_docs_removed` (TODO) applies the correct filtering at the web document level as of July 2023: `ds = load_dataset("HuggingFaceM4/OBELISC", "opt_out_docs_removed")` (TODO fix).
107
 
108
+ We recommend that users of OBELICS regularly check every document against the API.
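+
+ A possible re-checking loop is sketched below; the endpoint, authentication header and response handling are placeholders for illustration only and must be adapted to the actual Spawning API documentation:
+ ```
+ import os
+ import requests
+
+ # Hypothetical placeholders: consult the Spawning API documentation for the
+ # real endpoint, authentication scheme and response schema.
+ SPAWNING_ENDPOINT = "https://api.spawning.ai/..."  # placeholder, not a real route
+ API_KEY = os.environ.get("SPAWNING_API_KEY", "")
+
+ def check_urls_opt_out(urls):
+     """Ask the (assumed) opt-out endpoint about a batch of image URLs."""
+     response = requests.post(
+         SPAWNING_ENDPOINT,
+         headers={"Authorization": f"Bearer {API_KEY}"},
+         json={"urls": urls},
+     )
+     response.raise_for_status()
+     return response.json()
+ ```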
109
 
110
+ ## Content warnings
111
 
112
+ Despite our filtering efforts, OBELICS contains a small proportion of documents that are not suitable for all audiences. For instance, while navigating the interactive map, you might find the cluster named "Sex", which predominantly contains descriptions of pornographic movies along with pornographic images. Other clusters contain advertisements for sex workers or reports of violent shootings. In our experience, these documents represent a small proportion of all the documents.
113
 
114
  ## Terms of Use
115
 
 
123
 
124
  If you are using this dataset, please cite
125
  ```
126
+ @misc{laurençon2023obelisc,
127
+ title={OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
128
+ author={Hugo Laurençon and Lucile Saulnier and Léo Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M. Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
129
+ year={2023},
130
+ eprint={2306.16527},
131
+ archivePrefix={arXiv},
132
+ primaryClass={cs.IR}
133
  }
134
+ ```
assets/nomic_map.png ADDED

Git LFS Details

  • SHA256: 9531025e2f1baff9a7d9127e37710d1ae7c098894743dd0d281896a0cf2abe05
  • Pointer size: 132 Bytes
  • Size of remote file: 8.38 MB