---
license: cc0-1.0
size_categories:
- n>1T
multilinguality:
- multilingual
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
paperswithcode_id: oscar
pretty_name: Community-OSCAR
extra_gated_prompt: >-
  By filling the form below I understand that Community-OSCAR is just a partial
  annotation of the WET files of 45 Common Crawl snapshots, the original data is
  included here **only for convenience**, and especially for researchers looking
  for data in lower resource languages. **Only the annotations are distributed
  under a cc0-1.0 license**, for the rest of the content I have read the
  [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/) and I will
  abide by them. I understand that all uses of the textual content in
  Community-OSCAR are subject to the
  [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/).
  I understand that reusing the textual content in Community-OSCAR might not be
  legal in all countries/regions and for all use cases. I understand that
  Community-OSCAR is mainly targeted towards researchers and meant to be used in
  research. The OSCAR Project reserves the right to revoke my access to this
  data. The OSCAR Project reserves the right to modify this data at any time in
  accordance with take down requests.
extra_gated_fields:
  Name: text
  Email: text
  Affiliation: text
  Country: text
  Usecase: text
  I have explicitly checked that downloading Community-OSCAR is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the Common Crawl Terms of use: checkbox
---

# Community OSCAR

![image/png](https://huggingface.co/datasets/malteos/images/resolve/main/community-oscar.large.png)

The OSCAR project (**O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications. The project focuses specifically on providing large quantities of unannotated raw data that is commonly used in the pre-training of large deep learning models. The OSCAR project has developed [high-performance data pipelines](https://github.com/oscar-corpus/ungoliant) specifically conceived to classify and filter large amounts of [web data](https://commoncrawl.org/). The project has also paid special attention to improving the data quality of web-based corpora as well as providing data for low-resource languages, so that these new ML/AI technologies are accessible to as many communities as possible.

**Update 2024/11/14** Added the latest 2024-38 dump, bringing the total to 45 dumps.

**Update 2024/09/22** Added 3 new dumps:

- 2024-33
- 2024-30
- 2024-05

bringing the total to 44 and finishing the coverage of all Common Crawl releases from 2020 to 2024.

**Community-OSCAR** is an unofficial version of the OSCAR Corpus created by community members. The annotation schema follows the [OSCAR 23.01 release](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301) but is based on 45 monthly dumps of Common Crawl ranging from 2014-42 to 2024-38. With these 45 dumps, Community-OSCAR is the largest release of the OSCAR Corpus so far. The annotations include KenLM-based adult content detection, precomputed locality-sensitive hashes for near-deduplication, and blocklist-based content categories.

Community-OSCAR is distributed as JSONL files with Zstandard compression.
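If you download files directly, one way to read a shard locally is with the `zstandard` Python package. The snippet below is a minimal sketch; the file name is a placeholder, not an actual shard name:

```python
import io
import json

import zstandard  # pip install zstandard

# Stream-decompress a downloaded shard and parse one JSON document per line.
# 'example.jsonl.zst' is a placeholder for a locally downloaded file.
with open('example.jsonl.zst', 'rb') as fh:
    reader = zstandard.ZstdDecompressor().stream_reader(fh)
    for line in io.TextIOWrapper(reader, encoding='utf-8'):
        doc = json.loads(line)
        print(doc['content'][:80])  # first characters of the document text
        break
```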
You might already have the `zstd` command-line tool installed on your system; if not, please check the Zstandard website for installation instructions.

## Ongoing Community Effort

Community-OSCAR is a release created by members of the OSCAR community and is part of an ongoing effort in close collaboration with the [Occiglot research collective](https://occiglot.eu/). We are working on extending this release to all publicly available Common Crawl dumps, producing a filtered version (see [Occiglot-FineWeb](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5)), and have plenty of ideas for further improvements. If you want to support our activities and collaborate with us, please join the Discord server of the [OSCAR project](https://discord.com/invite/4JNg9FTar4) or the [Occiglot research collective](https://discord.gg/wUpvYs4XvM).

## Downloading the Data

You can stream the data directly with the `datasets` library using a script like this:

```python
from datasets import load_dataset

# Load the Afrikaans subset from the 2024-22 snapshot
ds = load_dataset('oscar-corpus/community-oscar',
                  data_files='data/2024-22/af_meta/*.jsonl.zst',
                  split='train', streaming=True)
```

Alternatively, you can download the data using the `huggingface_hub` [Python library](https://huggingface.co/docs/huggingface_hub/index). If you want to download a considerable amount of data, we recommend using the `hf_transfer` Python package and setting the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`.

## Supported Tasks and Leaderboards

OSCAR is mainly intended to pre-train language models and word representations.

**NOTE:** Community-OSCAR contains the raw, unfiltered Common Crawl text data together with quality annotations. For language model training, we highly recommend filtering the data first with these annotations. A prefiltered version of the dataset will be released in the near future (following the approach from [Occiglot-FineWeb](https://huggingface.co/datasets/occiglot/occiglot-fineweb-v0.5)).

## Data Annotations

Each sample comes with a series of annotations that allow the removal of low-quality data.

- `identification`: Language identification based on fastText.
- `harmful_pp`: This perplexity comes from a [KenLM model](https://kheafield.com/code/kenlm/) trained on harmful content, previously gathered by using the adult annotation in OSCAR 22.01. In other words, the lower it is, the more likely a given document contains harmful/adult content.
- `tlsh`: We use TLSH to compute a hash for each document. Locality-sensitive hashing is a hashing method that computes similar hashes for similar documents.
- `quality_warnings`: Computed through heuristics (see below).
- `categories`: Content categories from a [URL-based blocklist](https://dsi.ut-capitole.fr/blacklists/index_en.php).

The annotation schema is the same as in the [OSCAR 23.01 release](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301).

### Quality Warnings

* `tiny`: The document has a low (<5) number of lines.
* `short_sentences`: The document has a high number (>50%) of short lines (<400 bytes).
* `header`: The document has a high number of short lines at its head, suggesting the presence of low-quality content.
* `footer`: The document has a high number of short lines at its tail, suggesting the presence of low-quality content.
* `noisy`: The document has a high percentage of punctuation (>50%).
* `adult`: The document contains adult content. This annotation uses a blocklist and labels a tiny part of the corpus; it does not catch most of the adult content.
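As a sketch of how these annotations can be used, the snippet below streams a subset and keeps only documents that carry no quality warnings and whose `harmful_pp` is not suspiciously low. The helper function and the perplexity threshold are illustrative assumptions, not official recommendations:

```python
from datasets import load_dataset

ds = load_dataset('oscar-corpus/community-oscar',
                  data_files='data/2024-22/af_meta/*.jsonl.zst',
                  split='train', streaming=True)

def looks_clean(sample):
    # Illustrative filter: the helper and the 1000.0 cutoff are assumptions
    # made for this example, not official recommendations.
    meta = sample.get('metadata') or {}
    warnings = meta.get('quality_warnings') or []
    harmful_pp = meta.get('harmful_pp')
    # Low harmful_pp suggests adult/harmful content, so keep documents whose
    # perplexity is missing or above the cutoff.
    return not warnings and (harmful_pp is None or harmful_pp > 1000.0)

filtered = ds.filter(looks_clean)
```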
More information about the thresholds and annotators is available in the [OSCAR paper](https://oscar-project.org/publication/2022/arxiv/towards/).

## Data Format

The data is stored as Zstandard-compressed JSON Lines files. Each individual data sample has the following JSON schema:

```js
{
   "content":"English sentence\nphrase en français\n????????????", // (1)
   "warc_headers":{ // (2)
      "warc-identified-content-language":"fra,eng",
      "warc-target-uri":"https://fr.wikipedia.org/wiki/...",
      "warc-record-id":"<urn:uuid:...>",
      "warc-type":"conversion",
      "content-length":"35298", // (3)
      "warc-refers-to":"<urn:uuid:...>",
      "warc-block-digest":"sha1:WFH2A5WHCS2H365GIAFYQPI7UOAMFGHB", // (3)
      "warc-date":"2022-11-26T09:45:47Z",
      "content-type":"text/plain"
   },
   "metadata":{
      "identification":{ // (4)
         "label":"fr",
         "prob":0.8938327
      },
      "harmful_pp":4063.1814, // (5)
      "tlsh":"tlsh:T125315FF2B6088901EEA097015DB39B4600B...", // (6)
      "quality_warnings":[ // (7)
         "short_sentences",
         "header",
         "footer"
      ],
      "categories":[ // (8)
         "examen_pix",
         "liste_bu"
      ],
      "sentence_identifications":[ // (9)
         {
            "label":"fr",
            "prob":0.99837273
         },
         {
            "label":"en",
            "prob":0.9992377
         },
         null
      ]
   }
}
```

## Language Statistics

All the data is distributed by language and release name. Up to 151 different languages are available. The table below provides the language code as well as the number of documents, the compressed data size, and the number of words (split on whitespace) and characters. The statistics are computed on uncompressed data, based on estimates calculated from a subset of 12 releases and extrapolated to all 41 releases (snapshots). Roughly half of the data is English, leaving roughly 19T non-English words.

| lang | Language | Data (avg./release) | #Docs (avg./release) | #Words (avg./release) | #Characters (avg./release) | Data (Total) | #Docs (Total) | #Words (Total) | #Characters (Total) |
|:-------|:----------------------------|:-----------|:--------|:---------|:--------|:---------------|:----------------|:-----------------|:----------------|
| **Total** | | **6.78TiB** | **1.13B** | **935.22B** | **6.12T** | **278.01TiB** | **46.46B** | **38.34T** | **251.03T** |
| en | English | 2.56TiB | 483.20M | 465.20B | 2.80T | 105.16TiB | 19.81B | 19.07T | 114.75T |
| **Total (w/o en)** | | **4.22TiB** | **650.05M** | **470.02B** | **3.32T** | **172.85TiB** | **26.65B** | **19.27T** | **136.27T** |
| ru | Russian | 982.11GiB | 94.21M | 80.68B | 593.06B | 39.32TiB | 3.86B | 3.31T | 24.32T |
| zh | Chinese | 590.68GiB | 67.47M | 15.71B | 242.67B | 23.65TiB | 2.77B | 644.28B | 9.95T |
| de | German | 405.00GiB | 79.64M | 58.54B | 427.68B | 16.22TiB | 3.27B | 2.40T | 17.53T |
| es | Spanish | 349.22GiB | 58.54M | 59.59B | 367.67B | 13.98TiB | 2.40B | 2.44T | 15.07T |
| fr | French | 306.61GiB | 57.83M | 50.73B | 317.41B | 12.28TiB | 2.37B | 2.08T | 13.01T |
| it | Italian | 184.18GiB | 31.41M | 29.58B | 195.26B | 7.37TiB | 1.29B | 1.21T | 8.01T |
| ja | Japanese | 165.97GiB | 42.67M | 5.72B | 72.52B | 6.65TiB | 1.75B | 234.47B | 2.97T |
| pt | Portuguese | 151.63GiB | 26.79M | 25.09B | 157.95B | 6.07TiB | 1.10B | 1.03T | 6.48T |
| pl | Polish | 123.65GiB | 22.79M | 17.70B | 126.32B | 4.95TiB | 934.43M | 725.54B | 5.18T |
| nl | Dutch | 91.28GiB | 24.44M | 15.17B | 97.54B | 3.65TiB | 1.00B | 621.77B | 4.00T |
| vi | Vietnamese | 83.07GiB | 11.49M | 14.80B | 69.26B | 3.33TiB | 471.02M | 606.86B | 2.84T |
| th | Thai | 69.61GiB | 5.39M | 2.50B | 29.73B | 2.79TiB | 220.82M | 102.43B | 1.22T |
| el | Greek | 68.44GiB | 7.33M | 6.24B | 42.67B | 2.74TiB | 300.68M | 255.91B | 1.75T |
| tr | Turkish | 68.31GiB | 13.11M | 8.95B | 67.37B | 2.74TiB | 537.64M | 366.80B | 2.76T |
| fa | Persian | 66.77GiB | 8.60M | 7.76B | 40.94B | 2.67TiB | 352.65M | 318.20B | 1.68T |
| ar | Arabic | 61.53GiB | 8.57M | 6.28B | 37.83B | 2.46TiB | 351.41M | 257.37B | 1.55T |
| cs | Czech | 54.99GiB | 12.97M | 8.07B | 53.42B | 2.20TiB | 531.97M | 330.77B | 2.19T |
| hu | Hungarian | 44.35GiB | 7.43M | 5.80B | 43.57B | 1.78TiB | 304.66M | 237.74B | 1.79T |
| uk | Ukrainian | 44.22GiB | 5.21M | 3.65B | 26.80B | 1.77TiB | 213.63M | 149.61B | 1.10T |
| sv | Swedish | 42.76GiB | 9.03M | 6.95B | 43.99B | 1.71TiB | 370.36M | 284.99B | 1.80T |
| bg | Bulgarian | 33.90GiB | 3.49M | 3.21B | 20.78B | 1.36TiB | 142.90M | 131.57B | 852.02B |
| ro | Romanian | 33.50GiB | 4.65M | 5.29B | 34.25B | 1.34TiB | 190.83M | 216.89B | 1.40T |
| ko | Korean | 32.35GiB | 6.64M | 3.31B | 15.73B | 1.30TiB | 272.25M | 135.84B | 644.73B |
| fi | Finnish | 29.79GiB | 5.48M | 3.63B | 30.82B | 1.19TiB | 224.50M | 148.72B | 1.26T |
| he | Hebrew | 28.17GiB | 3.74M | 2.98B | 17.41B | 1.13TiB | 153.25M | 122.19B | 713.96B |
| hi | Hindi | 21.48GiB | 1.77M | 1.86B | 10.08B | 880.81GiB | 72.38M | 76.09B | 413.17B |
| id | Indonesian | 14.67GiB | 2.76M | 2.25B | 15.71B | 601.30GiB | 113.09M | 92.12B | 644.31B |
| lt | Lithuanian | 12.72GiB | 2.36M | 1.68B | 12.83B | 521.35GiB | 96.72M | 69.01B | 526.16B |
| bn | Bangla | 12.68GiB | 1.26M | 834.94M | 5.49B | 520.02GiB | 51.81M | 34.23B | 225.10B |
| sk | Slovak | 12.19GiB | 2.85M | 1.77B | 12.03B | 499.80GiB | 116.77M | 72.49B | 493.20B |
| da | Danish | 11.38GiB | 2.87M | 1.93B | 11.90B | 466.69GiB | 117.50M | 79.22B | 488.04B |
| ca | Catalan | 11.16GiB | 2.81M | 1.86B | 11.63B | 457.38GiB | 115.40M | 76.42B | 476.63B |
| ta | Tamil | 10.33GiB | 574763 | 559.30M | 4.59B | 423.46GiB | 23.57M | 22.93B | 188.20B |
| multi | - | 8.85GiB | 1.22M | 1.05B | 7.14B | 362.94GiB | 49.85M | 43.22B | 292.56B |
| ka | Georgian | 7.31GiB | 592883 | 387.90M | 3.14B | 299.82GiB | 24.31M | 15.90B | 128.83B |
| et | Estonian | 7.24GiB | 1.59M | 976.06M | 7.53B | 296.66GiB | 65.37M | 40.02B | 308.78B |
| sr | Serbian | 7.19GiB | 704978 | 675.30M | 4.44B | 294.65GiB | 28.90M | 27.69B | 182.02B |
| lv | Latvian | 7.12GiB | 1.21M | 936.00M | 7.03B | 291.95GiB | 49.69M | 38.38B | 288.11B |
| hy | Armenian | 4.43GiB | 424969 | 348.89M | 2.70B | 181.46GiB | 17.42M | 14.30B | 110.56B |
| ml | Malayalam | 4.32GiB | 291610 | 198.34M | 1.85B | 177.15GiB | 11.96M | 8.13B | 75.82B |
| az | Azerbaijani | 3.36GiB | 603832 | 408.83M | 3.12B | 137.62GiB | 24.76M | 16.76B | 128.07B |
| te | Telugu | 3.34GiB | 267406 | 194.03M | 1.49B | 136.82GiB | 10.96M | 7.96B | 60.95B |
| kk | Kazakh | 3.24GiB | 320143 | 239.00M | 1.93B | 132.95GiB | 13.13M | 9.80B | 79.00B |
| ne | Nepali | 3.17GiB | 400031 | 204.06M | 1.33B | 130.11GiB | 16.40M | 8.37B | 54.35B |
| mr | Marathi | 3.00GiB | 251313 | 194.56M | 1.32B | 122.82GiB | 10.30M | 7.98B | 54.26B |
| ur | Urdu | 2.66GiB | 342973 | 342.83M | 1.66B | 109.20GiB | 14.06M | 14.06B | 68.00B |
| mk | Macedonian | 2.57GiB | 379079 | 243.56M | 1.58B | 105.49GiB | 15.54M | 9.99B | 64.72B |
| sq | Albanian | 2.48GiB | 496860 | 407.12M | 2.48B | 101.56GiB | 20.37M | 16.69B | 101.67B |
| my | Burmese | 2.36GiB | 173587 | 79.12M | 968.68M | 96.60GiB | 7.12M | 3.24B | 39.72B |
| gu | Gujarati | 2.31GiB | 128983 | 194.10M | 1.15B | 94.72GiB | 5.29M | 7.96B | 47.23B |
| kn | Kannada | 2.04GiB | 153938 | 117.35M | 951.65M | 83.81GiB | 6.31M | 4.81B | 39.02B |
| be | Belarusian | 2.00GiB | 234797 | 163.91M | 1.21B | 82.05GiB | 9.63M | 6.72B | 49.51B |
| no | Norwegian | 1.97GiB | 1.11M | 320.46M | 2.07B | 80.65GiB | 45.70M | 13.14B | 84.72B |
| is | Icelandic | 1.89GiB | 479673 | 285.65M | 1.83B | 77.44GiB | 19.67M | 11.71B | 75.23B |
| mn | Mongolian | 1.87GiB | 213630 | 162.46M | 1.15B | 76.86GiB | 8.76M | 6.66B | 47.35B |
| km | Khmer | 1.79GiB | 140645 | 40.87M | 714.22M | 73.31GiB | 5.77M | 1.68B | 29.28B |
| si | Sinhala | 1.78GiB | 112624 | 134.15M | 844.35M | 73.06GiB | 4.62M | 5.50B | 34.62B |
| sl | Slovenian | 1.01GiB | 445779 | 156.82M | 1.06B | 41.59GiB | 18.28M | 6.43B | 43.45B |
| tg | Tajik | 988.00MiB | 73726 | 83.29M | 576.25M | 39.56GiB | 3.02M | 3.41B | 23.63B |
| eu | Basque | 806.16MiB | 262487 | 104.16M | 842.18M | 32.28GiB | 10.76M | 4.27B | 34.53B |
| pa | Punjabi | 780.76MiB | 71240 | 65.56M | 350.37M | 31.26GiB | 2.92M | 2.69B | 14.36B |
| tt | Tatar | 684.17MiB | 77931 | 56.24M | 402.50M | 27.39GiB | 3.20M | 2.31B | 16.50B |
| ckb | Central Kurdish | 624.12MiB | 93207 | 52.42M | 360.95M | 24.99GiB | 3.82M | 2.15B | 14.80B |
| ky | Kyrgyz | 489.66MiB | 72858 | 36.24M | 286.59M | 19.61GiB | 2.99M | 1.49B | 11.75B |
| tl | Filipino | 460.45MiB | 74172 | 77.36M | 480.16M | 18.44GiB | 3.04M | 3.17B | 19.69B |
| am | Amharic | 436.81MiB | 44448 | 38.29M | 205.38M | 17.49GiB | 1.82M | 1.57B | 8.42B |
| eo | Esperanto | 411.92MiB | 107893 | 64.90M | 421.28M | 16.49GiB | 4.42M | 2.66B | 17.27B |
| or | Odia | 364.42MiB | 54702 | 24.03M | 157.21M | 14.59GiB | 2.24M | 985.22M | 6.45B |
| bo | Tibetan | 335.38MiB | 22403 | 4.47M | 126.55M | 13.43GiB | 918536 | 183.15M | 5.19B |
| ps | Pashto | 293.88MiB | 45796 | 37.31M | 178.25M | 11.77GiB | 1.88M | 1.53B | 7.31B |
| lo | Lao | 286.92MiB | 35277 | 7.46M | 113.82M | 11.49GiB | 1.45M | 305.88M | 4.67B |
| cy | Welsh | 268.65MiB | 78645 | 46.25M | 278.10M | 10.76GiB | 3.22M | 1.90B | 11.40B |
| ug | Uyghur | 212.29MiB | 22044 | 15.59M | 121.98M | 8.50GiB | 903834 | 639.20M | 5.00B |
| dv | Divehi | 207.92MiB | 30485 | 13.36M | 119.18M | 8.33GiB | 1.25M | 547.74M | 4.89B |
| as | Assamese | 202.40MiB | 18951 | 13.49M | 88.34M | 8.10GiB | 776997 | 552.99M | 3.62B |
| gl | Galician | 195.43MiB | 96533 | 31.92M | 199.39M | 7.82GiB | 3.96M | 1.31B | 8.18B |
| yi | Yiddish | 157.11MiB | 20405 | 15.33M | 94.83M | 6.29GiB | 836632 | 628.66M | 3.89B |
| ba | Bashkir | 155.53MiB | 23672 | 12.54M | 92.19M | 6.23GiB | 970562 | 514.29M | 3.78B |
| ku | Kurdish | 120.61MiB | 31901 | 19.63M | 114.28M | 4.83GiB | 1.31M | 805.03M | 4.69B |
| sd | Sindhi | 110.46MiB | 14589 | 13.74M | 67.60M | 4.42GiB | 598183 | 563.48M | 2.77B |
| hr | Croatian | 88.36MiB | 14480 | 12.35M | 88.91M | 3.54GiB | 593680 | 506.36M | 3.65B |
| sa | Sanskrit | 87.88MiB | 7776 | 4.39M | 34.88M | 3.52GiB | 318843 | 180.14M | 1.43B |
| pnb | Western Panjabi | 53.99MiB | 8182 | 6.47M | 32.91M | 2.16GiB | 335482 | 265.46M | 1.35B |
| sah | Yakut | 50.51MiB | 7804 | 3.55M | 29.19M | 2.02GiB | 319984 | 145.57M | 1.20B |
| fy | Western Frisian | 50.19MiB | 23703 | 7.91M | 50.53M | 2.01GiB | 971826 | 324.19M | 2.07B |
| cv | Chuvash | 41.20MiB | 6514 | 3.40M | 24.15M | 1.65GiB | 267111 | 139.23M | 990.10M |
| ga | Irish | 33.83MiB | 15186 | 5.41M | 33.13M | 1.35GiB | 622656 | 221.81M | 1.36B |
| ceb | Cebuano | 30.24MiB | 4924 | 4.79M | 31.23M | 1.21GiB | 201918 | 196.19M | 1.28B |
| af | Afrikaans | 27.95MiB | 12458 | 5.07M | 28.89M | 1.12GiB | 510784 | 207.74M | 1.18B |
| br | Breton | 26.95MiB | 22426 | 4.53M | 27.41M | 1.08GiB | 919479 | 185.89M | 1.12B |
| os | Ossetic | 19.94MiB | 6503 | 1.70M | 11.90M | 817.46MiB | 266636 | 69.59M | 488.10M |
| uz | Uzbek | 16.15MiB | 13478 | 1.97M | 16.45M | 662.13MiB | 552625 | 80.91M | 674.62M |
| azb | South Azerbaijani | 14.83MiB | 7798 | 1.19M | 8.69M | 607.84MiB | 319721 | 48.88M | 356.10M |
| lb | Luxembourgish | 13.08MiB | 7289 | 1.96M | 13.28M | 536.38MiB | 298849 | 80.38M | 544.40M |
| mg | Malagasy | 12.88MiB | 3983 | 1.85M | 13.45M | 527.98MiB | 163309 | 76.00M | 551.43M |
| mhr | Eastern Mari | 10.49MiB | 2345 | 830225 | 6.13M | 430.25MiB | 96145 | 34.04M | 251.16M |
| nds | Low German | 9.58MiB | 2046 | 1.58M | 9.71M | 392.94MiB | 83916 | 64.77M | 398.20M |
| ce | Chechen | 8.88MiB | 3313 | 735487 | 5.22M | 363.94MiB | 135870 | 30.15M | 214.08M |
| xmf | Mingrelian | 7.23MiB | 2959 | 393400 | 3.15M | 296.34MiB | 121322 | 16.13M | 129.24M |
| new | Newari | 5.02MiB | 916 | 324361 | 2.08M | 205.73MiB | 37569 | 13.30M | 85.38M |
| sh | Serbo-Croatian | 4.22MiB | 1019 | 1.02M | 4.28M | 173.18MiB | 41779 | 41.69M | 175.64M |
| ms | Malay | 4.02MiB | 4988 | 382700 | 3.32M | 164.63MiB | 204511 | 15.69M | 136.14M |
| min | Minangkabau | 3.63MiB | 953 | 324454 | 2.07M | 148.63MiB | 39103 | 13.30M | 84.97M |
| nn | Norwegian Nynorsk | 3.30MiB | 8285 | 553667 | 3.37M | 135.40MiB | 339685 | 22.70M | 138.31M |
| tk | Turkmen | 2.41MiB | 1662 | 269719 | 2.28M | 98.64MiB | 68162 | 11.06M | 93.31M |
| gom | Goan Konkani | 2.14MiB | 135 | 127985 | 891404 | 87.80MiB | 5541 | 5.25M | 36.55M |
| arz | Egyptian Arabic | 2.11MiB | 1482 | 234425 | 1.26M | 86.36MiB | 60789 | 9.61M | 51.84M |
| bpy | Bishnupriya | 1.95MiB | 379 | 130226 | 824058 | 80.09MiB | 15566 | 5.34M | 33.79M |
| la | Latin | 1.61MiB | 4738 | 276095 | 1.68M | 66.12MiB | 194281 | 11.32M | 69.02M |
| pms | Piedmontese | 1.49MiB | 466 | 264178 | 1.47M | 61.22MiB | 19112 | 10.83M | 60.40M |
| jbo | Lojban | 1.30MiB | 276 | 277677 | 1.35M | 53.38MiB | 11329 | 11.38M | 55.40M |
| mt | Maltese | 1.17MiB | 2544 | 145437 | 1.17M | 48.00MiB | 104324 | 5.96M | 48.04M |
| oc | Occitan | 1020.71KiB | 280 | 38821 | 1.03M | 40.87MiB | 11510 | 1.59M | 42.35M |
| war | Waray | 997.57KiB | 328 | 150252 | 1.02M | 39.94MiB | 13468 | 6.16M | 41.73M |
| vo | Volapük | 935.64KiB | 572 | 140394 | 888363 | 37.46MiB | 23482 | 5.76M | 36.42M |
| ast | Asturian | 599.07KiB | 1742 | 90259 | 592203 | 23.99MiB | 71459 | 3.70M | 24.28M |
| lez | Lezghian | 454.96KiB | 145 | 35011 | 259634 | 18.22MiB | 5962 | 1.44M | 10.65M |
| mrj | Western Mari | 419.70KiB | 130 | 33093 | 242203 | 16.80MiB | 5340 | 1.36M | 9.93M |
| su | Sundanese | 394.30KiB | 35 | 61531 | 388935 | 15.79MiB | 1438 | 2.52M | 15.95M |
| sw | Swahili | 376.35KiB | 683 | 64274 | 384592 | 15.07MiB | 28033 | 2.64M | 15.77M |
| gsw | Swiss German | 367.62KiB | 163 | 54335 | 352731 | 14.72MiB | 6700 | 2.23M | 14.46M |
| wuu | Wu Chinese | 232.88KiB | 86 | 6765 | 86236 | 9.32MiB | 3532 | 277388 | 3.54M |
| wa | Walloon | 171.79KiB | 52 | 3278 | 175434 | 6.88MiB | 2132 | 134435 | 7.19M |
| gd | Scottish Gaelic | 91.63KiB | 253 | 10844 | 91821 | 3.67MiB | 10403 | 444624 | 3.76M |
| mzn | Mazanderani | 88.55KiB | 50 | 9282 | 50922 | 3.55MiB | 2080 | 380582 | 2.09M |
| hsb | Upper Sorbian | 85.59KiB | 134 | 11768 | 80704 | 3.43MiB | 5504 | 482505 | 3.31M |
| ia | Interlingua | 80.21KiB | 37 | 23253 | 81382 | 3.21MiB | 1527 | 953407 | 3.34M |
| krc | Karachay-Balkar | 58.04KiB | 83 | 4072 | 32574 | 2.32MiB | 3420 | 166969 | 1.34M |
| kv | Komi | 25.03KiB | 67 | 2182 | 14450 | 1.00MiB | 2760 | 89479 | 592456 |
| av | Avaric | 23.69KiB | 27 | 1440 | 13157 | 971.31KiB | 1113 | 59053 | 539450 |
| jv | Javanese | 18.04KiB | 46 | 2534 | 17534 | 739.83KiB | 1886 | 103928 | 718917 |
| ilo | Iloko | 15.43KiB | 39 | 2554 | 15737 | 632.47KiB | 1599 | 104737 | 645234 |
| li | Limburgish | 13.46KiB | 2 | 77 | 13762 | 552.02KiB | 118 | 3161 | 564260 |
| mai | Maithili | 11.20KiB | 18 | 1455 | 5092 | 459.00KiB | 738 | 59689 | 208806 |
| yo | Yoruba | 10.89KiB | 43 | 1502 | 7585 | 446.46KiB | 1773 | 61595 | 311008 |
| lmo | Lombard | 9.66KiB | 26 | 1582 | 9200 | 396.21KiB | 1066 | 64896 | 377227 |
| an | Aragonese | 7.55KiB | 11 | 264 | 4311 | 309.74KiB | 481 | 10851 | 176751 |
| bar | Bavarian | 6.82KiB | 26 | 1743 | 3543 | 279.77KiB | 1090 | 71479 | 145287 |
| io | Ido | 6.51KiB | 43 | 1114 | 6633 | 266.79KiB | 1797 | 45674 | 271970 |
| bh | Bihari languages | 5.90KiB | 21 | 465 | 2325 | 241.72KiB | 866 | 19065 | 95354 |
| bxr | Russia Buriat | 5.16KiB | 23 | 470 | 3036 | 211.74KiB | 951 | 19278 | 124476 |
| bs | Bosnian | 3.85KiB | 8 | 474 | 3802 | 157.87KiB | 328 | 19464 | 155912 |
| ie | Interlingue | 3.06KiB | 1 | 722 | 2967 | 125.60KiB | 61 | 29602 | 121647 |
| so | Somali | 2.26KiB | 18 | 647 | 2114 | 92.64KiB | 738 | 26556 | 86692 |
| xal | Kalmyk | 2.02KiB | 6 | 189 | 1184 | 82.73KiB | 280 | 7769 | 48544 |
| nah | Nahuatl languages | 1.80KiB | 13 | 192 | 1690 | 74.00KiB | 553 | 7884 | 69290 |
| gn | Guarani | 1.51KiB | 5 | 191 | 1375 | 61.90KiB | 222 | 7836 | 56386 |
| ht | Haitian Creole | 85.38B | 1 | 118 | 681 | 27.35KiB | 41 | 4858 | 27921 |
| kw | Cornish | 70.14B | 3 | 104 | 559 | 22.47KiB | 127 | 4272 | 22931 |
| x-eml | Unknown language [x-eml] | 61.71B | 1 | 85 | 445 | 19.77KiB | 41 | 3485 | 18258 |
| lrc | Northern Luri | 52.33B | 1 | 41 | 232 | 16.76KiB | 68 | 1681 | 9512 |
| dsb | Lower Sorbian | 36.08B | 1 | 37 | 260 | 11.56KiB | 68 | 1517 | 10687 |
| rue | Rusyn | 29.75B | 1 | 8 | 130 | 9.53KiB | 41 | 328 | 5330 |
| scn | Sicilian | 27.12B | 1 | 37 | 204 | 8.69KiB | 49 | 1525 | 8380 |
| qu | Quechua | 26.51B | 1 | 24 | 207 | 8.49KiB | 59 | 997 | 8487 |
| vec | Venetian | 20.00B | 1 | 34 | 151 | 6.41KiB | 41 | 1394 | 6191 |
| diq | Dimli (individual language) | 18.46B | 1 | 19 | 132 | 5.91KiB | 41 | 779 | 5425 |
| rm | Romansh | 13.00B | 1 | 10 | 104 | 4.16KiB | 41 | 410 | 4264 |

### Issues

Community-OSCAR may have quality issues in low-size subcorpora, as has been the case for previous releases. Please consider taking a look at [_Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets_](https://aclanthology.org/2022.tacl-1.4/) to get a better understanding of the current limitations of our language classifier. Note that since documents are identified as a whole, a given language subcorpus is expected to contain lines in other languages. As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.

**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**

## Dataset Creation

### Curation Rationale

OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), which is itself derived from [fastText's pipeline](https://github.com/facebookresearch/fastText). The pipeline works on documents rather than lines.
`Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org) and uses [rayon](https://github.com/rayon-rs/rayon) as its data-parallelism strategy. Threading is done at the shard, record, and sentence level, making the whole generation process much more efficient. Filtering will be explained in a future blog post on our [website](https://oscar-project.org).

### Source Data

[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web-crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain-text extracts (WET files). The organization's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.

Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.

To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain text from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services.

For Community-OSCAR, the following 45 monthly Common Crawl snapshots were used:

```
2024-38
2024-33
2024-30
2024-22
2024-18
2024-10
2023-50
2023-40
2023-23
2023-14
2023-06
2022-49
2022-40
2022-33
2022-27
2022-21
2022-05
2021-49
2021-43
2021-39
2021-31
2021-25
2021-21
2021-17
2021-10
2021-04
2020-50
2020-45
2020-40
2020-34
2020-29
2020-24
2020-16
2020-10
2020-05
2019-22
2018-47
2018-30
2017-43
2017-13
2016-40
2016-22
2015-48
2015-14
2014-42
```

### Who are the source language producers?

The data comes from multiple web pages in a large variety of languages.

### Personal and Sensitive Information

Since the corpus was constructed from Common Crawl, personal and sensitive information might be present. This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.

## Considerations for Using the Data

### Social Impact of Dataset

OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.

### Discussion of Biases

OSCAR is not properly filtered yet, and this can be reflected in models trained with it. Care is advised, especially concerning biases of the resulting models. We have added annotations to Common Crawl, so please consider using them to select the data that you would like to use for your particular use case.

### Other Known Limitations

The [fastText linear classifier](https://fasttext.cc) is limited both in its performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).

## Dataset Curators & Contributors

Community-OSCAR was put together by community members in close collaboration with the [Occiglot research collective](https://occiglot.eu/).
The main contributors are Manuel Brack, Pedro Ortiz Suarez, Malte Ostendorff, Patrick Schramowski, Georg Rehm, Kristian Kersting, Jose Javier Saiz, Iñaki Lacunza Castilla, Alexander Shvets, Jorge Palomar-Giner, and Marta Villegas.

Moreover, this release was supported by and enabled through contributions from the OSCAR team at [Inria](https://www.inria.fr/en) (project-team [ALMAnaCH](https://almanach.inria.fr/index-en.html)), especially by [Julien Abadji](https://ujj.space), [Rua Ismail](https://oscar-project.org/authors/rua/) and [Benoit Sagot](http://pauillac.inria.fr/~sagot/), the [Common Crawl Foundation](https://commoncrawl.org/), the [SLT](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology) and [SAINT](https://www.dfki.de/en/web/research/research-departments/foundations-of-systems-ai) teams at [DFKI](https://www.dfki.de/en/web), [TU Darmstadt](https://www.tu-darmstadt.de/), the [LangTech unit](https://www.bsc.es/discover-bsc/organisation/research-departments/language-technologies-unit) at the [Barcelona Supercomputing Center](https://www.bsc.es/), the [42 supercomputer and Hessian AI](https://hessian.ai/), the [OpenGPT-X project](https://opengpt-x.de/en/), [Fraunhofer](https://www.iais.fraunhofer.de/), [Jülich Supercomputing Centre](https://www.fz-juelich.de/), [TU Dresden](https://tu-dresden.de/zih) and [Deutsche Telekom](https://www.telekom.com/), as well as by members of the OSCAR community, in particular [Sotaro Takeshita](https://sotaro.io/about) and [Sebastian Nagel](https://www.polver.uni-konstanz.de/cnc/people/nagel/).

## Licensing Information

These data are released under the following licensing scheme:

- We do not own any of the text from which these data have been extracted.
- We license the actual packaging, the metadata and the annotations of these data under the Creative Commons CC0 license ("no rights reserved"), http://creativecommons.org/publicdomain/zero/1.0/.
- To the extent possible under law, the OSCAR community members have waived all copyright and related or neighboring rights to OSCAR.

Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

- Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
- Clearly identify the copyrighted work claimed to be infringed.
- Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.

We will comply with legitimate requests by removing the affected sources. Please use the [contact information](https://oscar-project.org/#contact) on our website for takedown requests. We also strongly advise users to submit takedown requests to Common Crawl.
For more information, please read their [Terms of Use](https://commoncrawl.org/terms-of-use/).

## Citation Information

If you use our work, please cite the technical report:

```bibtex
@misc{brack2024communityoscar,
  title = {Community OSCAR: A Community Effort for Multilingual Web Data},
  author = {Manuel Brack and Malte Ostendorff and Pedro Ortiz Suarez and José Javier Saiz and Iñaki Lacunza Castilla and Jorge Palomar-Giner and Aleksandr Shvets and Patrick Schramowski and Georg Rehm and Marta Villegas and Kristian Kersting},
  year = {2024},
  howpublished = {technical report},
  url = {https://occiglot.eu/papers/Community_Oscar.pdf}
}
```

Additionally, please consider citing the following works that Community-OSCAR relies on:

```bibtex
@ARTICLE{2022arXiv221210440J,
  author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro},
  title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}",
  journal = {arXiv e-prints},
  keywords = {Computer Science - Computation and Language},
  year = 2022,
  month = dec,
  eid = {arXiv:2212.10440},
  pages = {arXiv:2212.10440},
  doi = {10.48550/arXiv.2212.10440},
  archivePrefix = {arXiv},
  eprint = {2212.10440},
  primaryClass = {cs.CL},
  adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}

@inproceedings{abadji-etal-2022-towards,
  title = "Towards a Cleaner Document-Oriented Multilingual Crawled Corpus",
  author = "Abadji, Julien and Ortiz Suarez, Pedro and Romary, Laurent and Sagot, Beno{\^\i}t",
  booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
  month = jun,
  year = "2022",
  address = "Marseille, France",
  publisher = "European Language Resources Association",
  url = "https://aclanthology.org/2022.lrec-1.463",
  pages = "4344--4355",
  abstract = "The need for large raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.",
}

@inproceedings{AbadjiOrtizSuarezRomaryetal.2021,
  author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot},
  title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus},
  series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021.
  Limerick, 12 July 2021 (Online-Event)},
  editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Bański and Adrien Barbaresi and Simon Clematide and Ines Pisetta},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address = {Mannheim},
  doi = {10.14618/ids-pub-10468},
  url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688},
  pages = {1 -- 9},
  year = {2021},
  abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.},
  language = {en}
}

@article{kreutzer-etal-2022-quality,
  title = "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets",
  author = {Kreutzer, Julia and Caswell, Isaac and Wang, Lisa and Wahab, Ahsan and van Esch, Daan and Ulzii-Orshikh, Nasanbayar and Tapo, Allahsera and Subramani, Nishant and Sokolov, Artem and Sikasote, Claytone and Setyawan, Monang and Sarin, Supheakmungkol and Samb, Sokhar and Sagot, Beno{\^\i}t and Rivera, Clara and Rios, Annette and Papadimitriou, Isabel and Osei, Salomey and Suarez, Pedro Ortiz and Orife, Iroro and Ogueji, Kelechi and Rubungo, Andre Niyongabo and Nguyen, Toan Q. and M{\"u}ller, Mathias and M{\"u}ller, Andr{\'e} and Muhammad, Shamsuddeen Hassan and Muhammad, Nanda and Mnyakeni, Ayanda and Mirzakhalov, Jamshidbek and Matangira, Tapiwanashe and Leong, Colin and Lawson, Nze and Kudugunta, Sneha and Jernite, Yacine and Jenny, Mathias and Firat, Orhan and Dossou, Bonaventure F. P. and Dlamini, Sakhile and de Silva, Nisansa and {\c{C}}abuk Ball{\i}, Sakine and Biderman, Stella and Battisti, Alessia and Baruwa, Ahmed and Bapna, Ankur and Baljekar, Pallavi and Azime, Israel Abebe and Awokoya, Ayodele and Ataman, Duygu and Ahia, Orevaoghene and Ahia, Oghenefego and Agrawal, Sweta and Adeyemi, Mofetoluwa},
  journal = "Transactions of the Association for Computational Linguistics",
  volume = "10",
  year = "2022",
  address = "Cambridge, MA",
  publisher = "MIT Press",
  url = "https://aclanthology.org/2022.tacl-1.4",
  doi = "10.1162/tacl_a_00447",
  pages = "50--72",
  abstract = "With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality.
  In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.",
}

@inproceedings{ortiz-suarez-etal-2020-monolingual,
  title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages",
  author = "Ortiz Su{\'a}rez, Pedro Javier and Romary, Laurent and Sagot, Beno{\^\i}t",
  booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
  month = jul,
  year = "2020",
  address = "Online",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/2020.acl-main.156",
  pages = "1703--1714",
  abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.",
}

@inproceedings{OrtizSuarezSagotRomary2019,
  author = {Pedro Javier {Ortiz Su{\'a}rez} and Beno{\^\i}t Sagot and Laurent Romary},
  title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures},
  series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019},
  editor = {Piotr Bański and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{\"u}ngen and Caroline Iliadi},
  publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache},
  address = {Mannheim},
  doi = {10.14618/ids-pub-9021},
  url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215},
  pages = {9 -- 16},
  year = {2019},
  abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures.
  We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.},
  language = {en}
}
```