---
pretty_name: HALvest
configs:
- config_name: bg
  data_files: "bg/*.gz"
- config_name: br
  data_files: "br/*.gz"
- config_name: ca
  data_files: "ca/*.gz"
- config_name: cs
  data_files: "cs/*.gz"
- config_name: da
  data_files: "da/*.gz"
- config_name: de
  data_files: "de/*.gz"
- config_name: el
  data_files: "el/*.gz"
- config_name: en
  data_files: "en/*.gz"
- config_name: eo
  data_files: "eo/*.gz"
- config_name: es
  data_files: "es/*.gz"
- config_name: et
  data_files: "et/*.gz"
- config_name: eu
  data_files: "eu/*.gz"
- config_name: fa
  data_files: "fa/*.gz"
- config_name: fi
  data_files: "fi/*.gz"
- config_name: fr
  data_files: "fr/*.gz"
- config_name: gl
  data_files: "gl/*.gz"
- config_name: he
  data_files: "he/*.gz"
- config_name: hr
  data_files: "hr/*.gz"
- config_name: hu
  data_files: "hu/*.gz"
- config_name: hy
  data_files: "hy/*.gz"
- config_name: id
  data_files: "id/*.gz"
- config_name: it
  data_files: "it/*.gz"
- config_name: ko
  data_files: "ko/*.gz"
- config_name: "no"
  data_files: "no/*.gz"
- config_name: pl
  data_files: "pl/*.gz"
- config_name: pt
  data_files: "pt/*.gz"
- config_name: ro
  data_files: "ro/*.gz"
- config_name: ru
  data_files: "ru/*.gz"
- config_name: sk
  data_files: "sk/*.gz"
- config_name: sl
  data_files: "sl/*.gz"
- config_name: sv
  data_files: "sv/*.gz"
- config_name: sw
  data_files: "sw/*.gz"
- config_name: th
  data_files: "th/*.gz"
- config_name: tr
  data_files: "tr/*.gz"
language:
- bg
- br
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hr
- hu
- hy
- id
- it
- ko
- "no"
- pl
- pt
- ro
- ru
- sk
- sl
- sv
- sw
- th
- tr
size_categories:
- n<1K
- 1K
---

# HALvest

Open Scientific Papers Harvested from HAL

## Dataset Description

- **Repository:** [GitHub](https://github.com/Madjakul/HALvesting/tree/main)

## Dataset Summary

### Overview

This dataset comprises the full text of open papers found on [Hyper Articles en Ligne (HAL)](https://hal.science/). Our dump is mostly English/French but gathers papers written in 34 languages across 13 domains. You can download the dataset using Hugging Face datasets:

```py
from datasets import load_dataset

ds = load_dataset("Madjakul/HALvest", "en")
```

A streaming variant of this snippet, together with hedged sketches of the fetching and filtering steps described below, appears after the usage considerations at the end of this card.

### Details

Building the dataset is a four-step process: data fetching from HAL, data merging, data enriching, and data filtering.

1. We first request [HAL's API](https://api.archives-ouvertes.fr/docs) in order to gather open research papers and parse the response -- effectively sorting papers by language. Then, we download the PDFs of the fetched data.
2. Using [GROBID](https://github.com/kermitt2/grobid), we convert each PDF to an `xml-tei` format in order to have structured data. We convert each `xml-tei` file to a `txt` format before concatenating it with the paper's metadata.
3. We compute some statistics about each document.
4. We filter the data based on simple ratios to weed out badly encoded documents.

### Languages

ISO-639|Language|# Documents|# mT5 Tokens
-------|--------|-----------|------------
en|English|442,892|7,606,895,258
fr|French|193,437|8,728,722,255
es|Spanish|2,930|68,076,878
it|Italian|1,172|48,747,986
pt|Portuguese|934|32,918,832
de|German|646|11,699,417
ru|Russian|245|5,763,532
eu|Basque|112|2,297,460
pl|Polish|43|987,878
el|Greek|42|1,680,696
ro|Romanian|39|1,298,901
ca|Catalan|28|975,078
da|Danish|26|961,895
br|Breton|24|998,088
ko|Korean|17|226,268
tr|Turkish|17|149,718
hu|Hungarian|14|577,568
eo|Esperanto|14|105,286
fa|Persian|10|190,929
hy|Armenian|10|127,988
cs|Czech|9|712,263
bg|Bulgarian|8|180,146
id|Indonesian|9|53,075
he|Hebrew|8|61,283
hr|Croatian|8|40,621
et|Estonian|7|20,405
sv|Swedish|6|270,642
no|Norwegian|6|62,767
fi|Finnish|3|17,583
sw|Swahili|2|73,921
gl|Galician|2|29,688
th|Thai|1|70,909
sl|Slovenian|1|22,844
sk|Slovak|1|12,997

### Domains

Domain|Code|# Documents|# mT5 Tokens
------|----|-----------|------------
Humanities and Social Sciences|shs|152,818|5,487,738,344
Computer Science|info|143,229|2,436,890,715
Life Sciences|sdv|111,038|3,008,633,879
Engineering Sciences|spi|99,393|2,155,602,249
Physics|phys|63,557|1,435,905,328
Mathematics|math|54,393|1,359,277,656
Chemical Science|chim|38,500|857,617,219
Environmental Science|sde|30,827|566,560,266
Sciences of the Universe|sdu|22,917|654,909,131
Statistics|stat|20,571|1,449,842,318
Cognitive science|scco|11,584|222,832,732
Quantitative Finance|qfin|3,290|64,970,285
Nonlinear Sciences|nlin|1,908|29,296,684

You can browse through every domain and sub-domain here: https://hal.science/browse/domain.

## Considerations for Using the Data

The corpus is extracted from [HAL's open archive](https://hal.science/), which distributes scientific publications following open access principles. The corpus is made up of both Creative Commons-licensed and copyrighted documents (distribution authorized on HAL by the publisher). This must be considered prior to using this dataset for any purpose other than training deep learning models, data mining, etc. We do not own any of the text from which these data have been extracted.
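Complementing the loading snippet in the overview, the per-language configs can also be read in streaming mode, which avoids downloading a full dump up front; this is standard 🤗 Datasets usage rather than anything HALvest-specific, and is convenient for the large `en` and `fr` configs:

```py
from datasets import load_dataset

# Stream the French config instead of downloading it entirely;
# each config exposes a single default "train" split.
ds = load_dataset("Madjakul/HALvest", "fr", streaming=True)
print(next(iter(ds["train"])))
```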
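Step 1 of the pipeline above queries HAL's search API. The sketch below shows roughly what such a request looks like; the endpoint is real, but the query and the Solr field names (`language_s`, `openAccess_bool`, `title_s`, `fileMain_s`) are assumptions to verify against the [API documentation](https://api.archives-ouvertes.fr/docs) -- the actual harvesting code lives in the GitHub repository.

```py
import requests

HAL_API = "https://api.archives-ouvertes.fr/search/"

def fetch_open_papers(lang: str, rows: int = 100) -> list:
    """Fetch metadata for open-access papers written in `lang`.

    Field names in `fq`/`fl` are assumed from HAL's Solr schema and
    should be checked against the API documentation.
    """
    params = {
        "q": "*:*",
        "fq": f"language_s:{lang} AND openAccess_bool:true",
        "fl": "docid,title_s,language_s,fileMain_s",
        "rows": rows,
        "wt": "json",
    }
    response = requests.get(HAL_API, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["response"]["docs"]

papers = fetch_open_papers("en")
print(len(papers), papers[0].get("title_s"))
```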
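Step 4 filters on simple ratios; the real thresholds live in the repository. The check below is a hypothetical illustration of that kind of heuristic (the ratio names and cut-offs are invented for the example), not the filter HALvest actually applies:

```py
def looks_well_encoded(text: str,
                       min_alpha_ratio: float = 0.6,
                       max_mean_word_len: float = 15.0) -> bool:
    """Heuristically flag badly encoded or garbled documents.

    Both thresholds are illustrative, not the ones used by HALvest.
    """
    if not text:
        return False
    # Share of alphabetic characters: garbled PDF extractions skew low.
    alpha_ratio = sum(c.isalpha() for c in text) / len(text)
    # Mean token length: lost whitespace during extraction skews high.
    words = text.split()
    mean_word_len = sum(map(len, words)) / max(len(words), 1)
    return alpha_ratio >= min_alpha_ratio and mean_word_len <= max_mean_word_len
```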
## Citation

```bib
@software{almanach_halvest_2024,
  author = {Kulumba, Francis and Antoun, Wissam and Vimont, Guillaume and Romary, Laurent},
  title = {HALvest: Open Scientific Papers Harvested from HAL.},
  month = apr,
  year = {2024},
  organization = {Almanach},
  url = {https://github.com/Madjakul/HALvesting}
}
```

## Dataset Copyright

The license terms for HALvest strictly follow those of HAL. Please refer to the license below when using this dataset.

- [HAL license](https://doc.archives-ouvertes.fr/en/legal-aspects/)